My network's output from the PyTorch model differs from the TensorRT model's output when both are given the same input data. What should I do?


output from trt model:
Unnamed Network 0
EngineBinding0-> (-1, 3, 224, 224) DataType.FLOAT
EngineBinding1-> (-1, 2) DataType.FLOAT
inputH0 : (3, 224, 224)
outputH0: (1, 2)
[[ 2.65625 -2.6523438]]
Succeeded running model in TensorRT!

output from pytorch model:

output1: torch.Size([1, 2])
tensor([[ 2.8536, -2.8530]], device='cuda:0')

I don't know why there is such an error. Can anyone explain it?
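To put a number on the mismatch, you can compare the two outputs directly. A minimal sketch using the values copied from the post above (NumPy only; the tolerance thresholds are illustrative assumptions):

```python
import numpy as np

# Output values copied from the post (TensorRT vs. PyTorch)
trt_out = np.array([[2.65625, -2.6523438]], dtype=np.float32)
torch_out = np.array([[2.8536, -2.8530]], dtype=np.float32)

abs_diff = np.abs(trt_out - torch_out).max()
rel_diff = (np.abs(trt_out - torch_out) / np.abs(torch_out)).max()
print(f"max abs diff: {abs_diff:.4f}, max rel diff: {rel_diff:.2%}")
```

Note that the predicted class (argmax) still agrees, but a roughly 7% relative deviation is larger than what FP32 graph optimizations alone usually introduce, so it is worth checking each stage of the conversion.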


TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered


Could you please give us more details?
How are you generating the TensorRT model? Are you following the PyTorch → ONNX → TensorRT path?
If so, could you please check whether the ONNX model output matches the PyTorch model output?

Thank you.