The same model produces different results in TensorRT 8 and TensorRT 7

I used the trtexec tool to convert the same ONNX file into a TensorRT engine on a Jetson Xavier (JetPack 4.6) and a Jetson Orin (JetPack 5.0.1).

I then ran inference on both devices with the same input.

The results are quite different.

I have uploaded the ONNX file and a test.bin input file to reproduce the issue, along with the result files I got (reproduction commands are sketched after the attachment list).

It would be great if someone could explain why this is happening.
orin.json (2.8 MB)
xavier.json (2.8 MB)
test.bin (10.5 MB)
tl-lite0-1101-b1.onnx (87.9 MB)
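
The reproduction steps would be along these lines (a sketch: the engine file name and the input tensor name "input" are placeholders, so check the model's actual input name, e.g. in Netron, before running):

# Build a TensorRT engine from the ONNX model (run on each Jetson)
/usr/src/tensorrt/bin/trtexec --onnx=tl-lite0-1101-b1.onnx --saveEngine=tl-lite0.engine

# Run inference on the fixed input and dump the outputs to JSON
# ("input" is a placeholder for the model's real input tensor name)
/usr/src/tensorrt/bin/trtexec --loadEngine=tl-lite0.engine --loadInputs=input:test.bin --exportOutput=result.json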

Hi,
Please refer to the link below for the sample guide.

Refer to the installation steps in that link in case you are missing anything.

However, the suggested approach is to use the TensorRT NGC containers to avoid any system-dependency issues.

To run the Python samples when using an NGC container, make sure the TensorRT Python packages are installed:
/opt/tensorrt/python/python_setup.sh
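
For example, a container can be pulled and started like this (the 22.07-py3 tag is only an illustration; pick the release that matches your setup, and note that Jetson devices use the separate l4t-tensorrt images):

# Pull and start a TensorRT NGC container (tag is an example)
docker pull nvcr.io/nvidia/tensorrt:22.07-py3
docker run --gpus all -it --rm nvcr.io/nvidia/tensorrt:22.07-py3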

If you are trying to run a custom model, please share your model and script with us so that we can assist you better.
Thanks!

I believe you can reproduce the issue with the command-line tool trtexec on the same Jetson devices, as outlined in my first post.

Hi,

We are moving this post to the Jetson Xavier forum to get better help.

Thank you.

Hi,

Thanks for reporting this issue.

We are going to give it a try and will share more information with you later.

Thanks.

Hi,

We tested the input and model with TensorRT 8.4 on Xavier, and the outputs are different from yours.
Is all of the input data 0x3f800000 (the FP32 encoding of 1.0)?
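
A quick way to check, assuming test.bin stores little-endian FP32 values:

# Dump the buffer as little-endian 4-byte words; 3f800000 is FP32 1.0
xxd -e -g 4 test.bin | head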

Could you replace it with a real image and share it with us?

Thanks.
