TensorRT inference gives different results

TensorRT inference gives different results on ARM (Xavier AGX) and x86 devices

TensorRT version 8.1

I have trained a YOLOv5 model and converted it to a TensorRT engine for inference, but the two devices give me different results. I tested the output and made sure NMS is done correctly. Then I took a look at the raw network output and found that it differs: on Xavier I get very high-confidence boxes, which ends up with 40 objects detected, while the x86 device detects only 28.
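To pin down where the outputs diverge, one approach (my own debugging sketch, not something from this thread) is to dump the raw network output on each device with `numpy.save` and then compare the two arrays. The file names and the standard YOLOv5 `(N, 85)` output layout (column 4 = objectness) are assumptions here:

```python
import numpy as np

def compare_outputs(a, b, rtol=1e-3, atol=1e-4):
    """Return the max absolute difference and whether the outputs agree
    within tolerance."""
    max_diff = float(np.max(np.abs(a - b)))
    return max_diff, bool(np.allclose(a, b, rtol=rtol, atol=atol))

def count_confident(pred, conf_thres=0.25):
    """Count raw boxes above the objectness threshold (column 4 in the
    usual YOLOv5 (N, 85) output). Scores sitting close to the threshold
    can flip across devices due to tiny FP32 differences, which would
    explain a 40-vs-28 detection count."""
    return int(np.sum(pred[:, 4] > conf_thres))
```

On each device you would save the engine output right after inference (e.g. `np.save("xavier_out.npy", output)`, an assumed file name), copy both files to one machine, and run the two helpers on them. If `max_diff` is tiny but the counts differ, the issue is threshold sensitivity rather than a broken engine.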

My question is: why is this happening?
Some other observations that could be relevant:

I am using FP32 for inference, to make sure this is not happening because of quantization.
When I create the engine, though, I see these warnings:

WARNING [1656455070.957732]  [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING [1656455070.957848]  [TRT] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
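The second warning means at least one INT64 value in the model fell outside INT32 range and was clamped. This is often harmless (a common case is INT64_MAX used as a slice end, which clamps to INT32_MAX), but it is worth knowing what the downcast does. A minimal sketch of the behavior the warning describes — my reading of the message, not the actual onnx2trt parser code:

```python
import numpy as np

INT32_MIN = np.iinfo(np.int32).min  # -2147483648
INT32_MAX = np.iinfo(np.int32).max  # 2147483647

def downcast_int64(w):
    """Clip INT64 values into INT32 range, then cast -- the effect the
    onnx2trt warning describes (a sketch, not the parser's own code)."""
    w = np.asarray(w, dtype=np.int64)
    return np.clip(w, INT32_MIN, INT32_MAX).astype(np.int32)
```

For example, `downcast_int64([2**63 - 1])` gives `[2147483647]`: an INT64_MAX slice bound survives as "slice to the end", while a genuinely large weight value would be silently altered.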


TensorRT Version: 8.1
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered


The INT64 warning is there to notify the user that we downcast any INT64 values to INT32. Generally, this has no effect on the resulting model or on inference.
We recommend trying the latest TensorRT version, 8.4 GA. If you still face this issue, please share the repro ONNX model and script with us for better debugging.
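One way to localize the divergence before filing a repro (a suggestion on my part, not something prescribed in this thread) is Polygraphy, which ships with TensorRT and can run the same ONNX model under both TensorRT and ONNX Runtime and compare the outputs:

```shell
# Compare TensorRT output against ONNX Runtime on the same machine.
# "model.onnx" is an assumed path; run this on both the Xavier and the
# x86 box and compare which one diverges from the ONNX Runtime baseline.
polygraphy run model.onnx --trt --onnxrt
```

If one device matches ONNX Runtime and the other does not, that narrows the problem to the TensorRT build on the diverging platform.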

Thank you.