Description
We have a program that converts an ONNX model to a TensorRT engine and runs a prediction. The program works correctly on a Quadro T1000 laptop GPU inside a Docker container, but returns wrong output results on a Jetson Xavier NX. Apart from the platform architecture, the two setups are more or less the same.
I also noticed that the engine built on the Jetson Xavier NX is only half the size of the one built on the laptop.
More info and steps to reproduce can be found below.
Many thanks in advance.
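Since the two platforms run different TensorRT versions (8.5.2 vs 8.4.1.5) and different GPUs, each engine is built independently on its own device, and a size difference like this can come from different precision or tactic choices made by the builder. To quantify how far apart the predictions actually are, here is a minimal comparison sketch, assuming each platform dumps its output tensor to a `.npy` file (the filenames are hypothetical, not from the repo):

```python
import numpy as np

def compare_outputs(ref, test, rtol=1e-3, atol=1e-3):
    """Report how far two prediction tensors diverge."""
    ref = np.asarray(ref, dtype=np.float32)
    test = np.asarray(test, dtype=np.float32)
    abs_diff = np.abs(ref - test)
    return {
        "max_abs_diff": float(abs_diff.max()),
        "mean_abs_diff": float(abs_diff.mean()),
        "allclose": bool(np.allclose(ref, test, rtol=rtol, atol=atol)),
    }

if __name__ == "__main__":
    # Hypothetical dumps: one prediction per platform.
    laptop = np.load("laptop_output.npy")
    jetson = np.load("jetson_output.npy")
    print(compare_outputs(laptop, jetson))
```

A large `max_abs_diff` with a plausible distribution would point to a precision mismatch between the two builds, whereas completely unrelated values would suggest a bug in preprocessing or in the engine build itself.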
Environment
Host machine:
TensorRT Version: 8.5.2
GPU Type: NVIDIA Quadro T1000
Nvidia Driver Version: 517
CUDA Version: 11.7
CUDNN Version: 11.8
Operating System + Version: Windows 11
Python Version (if applicable): /
TensorFlow Version (if applicable): /
PyTorch Version (if applicable): /
Baremetal or Container (if container which image + tag): Container (see the Dockerfile in the GitHub repo)
Jetson:
TensorRT Version: 8.4.1.5
GPU Type: Jetson Xavier NX
Nvidia Driver Version: ??
CUDA Version: 11.4
CUDNN Version: 8.4.1
Operating System + Version: Jetpack 5.2
Python Version (if applicable): /
TensorFlow Version (if applicable): /
PyTorch Version (if applicable): /
Baremetal or Container (if container which image + tag): Baremetal
Relevant Files and Steps To Reproduce
See GitHub: https://github.com/SirMomster/tensorrt-recognition-jetson-reproduce