TensorRT output and PyTorch output are not the same with FP32 inference (classification model)

Hi.
After converting a torchvision model -> ONNX -> TensorRT, I get an index and score output from model inference.
To check that the outputs from TensorRT match those from the PyTorch model in model.eval() mode, I feed in an all-ones input (a white image) and get quite different indices from the two platforms. However, with an all-zeros input the two platforms return the same index and scores.
So I assume the DenseLayer weights and biases are converted correctly on the TensorRT side. Why does this happen? Do some operators behave differently between the two platforms? Can I fix it?
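For reference, here is roughly how I produce the PyTorch reference output and the ONNX file (a minimal sketch; the 1x3x224x224 input shape is just the usual torchvision size, and resnet50 is shown only as an example):

    import torch
    import torchvision.models as models

    # Pretrained classifier, switched to inference mode.
    model = models.resnet50(pretrained=True).eval()

    # All-ones ("white") test input; 1x3x224x224 is the standard torchvision shape.
    x = torch.ones(1, 3, 224, 224)

    # Reference top-1 index and score from PyTorch.
    with torch.no_grad():
        scores = model(x)
    score, index = torch.max(scores, dim=1)
    print(index.item(), score.item())

    # The same model and input are exported to ONNX, which is then
    # built into a TensorRT engine on the C++ side.
    torch.onnx.export(model, x, "resnet50.onnx")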
Any help will be appreciated.

Torchvision pretrained models: resnet50, densenet161
TensorRT platform: Win10, C++, VS2017, TensorRT 5.1.5.0 GA
PyTorch platform: Win10, Python, with model.eval()
Hardware platform: RTX 2080 Ti, CUDA 10.1, cuDNN 7.6.1

Help! How can I fix this?

I’m having the exact same problem. The output of TensorRT via the Python API differs greatly from the PyTorch model output.

Using a Jetson Nano with JetPack 4.2. ResNet-18 model from torchvision, converted to TensorRT with the torch2trt converter.
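For context, my conversion and comparison follow the torch2trt README pattern; a minimal sketch (the 1x3x224x224 input shape is an assumption):

    import torch
    import torchvision.models as models
    from torch2trt import torch2trt

    # Pretrained ResNet-18 in eval mode on the GPU.
    model = models.resnet18(pretrained=True).eval().cuda()

    # Example input, used both for the conversion trace and the comparison.
    x = torch.ones(1, 3, 224, 224).cuda()

    # Build the TensorRT-backed module.
    model_trt = torch2trt(model, [x])

    # Run both models on the same input and compare the raw outputs.
    with torch.no_grad():
        y = model(x)
        y_trt = model_trt(x)
    print(torch.max(torch.abs(y - y_trt)))

If the conversion were exact, this maximum absolute difference should be near zero at FP32, but for me it is large.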

Any help with this problem would be much appreciated!

Hi ristoojala, have you solved it? I have the same problem too. Thanks in advance!