How to use the same TensorRT version as Jetson Orin Nano in a desktop PC environment

As far as I know, a converted TensorRT model is only supported in the same TensorRT environment it was built in. (For example, if I convert a model with TensorRT on a Jetson Orin Nano system, the converted model can only run on a Jetson Orin Nano system.) I verified this: when I converted a model with TensorRT on one Jetson Orin Nano, the converted model could run inference on another Jetson Orin Nano system.
But I want to make deploying TensorRT models easier and more convenient; the current process is cumbersome for anyone. I would like to convert the model with TensorRT on a desktop PC, but when I tried, problems occurred. Below are the problems I hit when converting a TensorRT model in a desktop PC environment and deploying it on a Jetson Orin Nano system.

  1. It is impossible to match the versions of the PC and onboard (Jetson Orin Nano) environments. How can I install the same versions of the TensorRT, cuDNN, CUDA, etc. packages?
  • As far as I know, converting a model with TensorRT is highly dependent on the versions of packages such as cuDNN and CUDA. I installed JetPack 6.0 on the Jetson Orin Nano board using NVIDIA SDK Manager, and this JetPack ships with default package versions (TensorRT: 8.6.2.3, cuDNN: 8.9.4, CUDA: 12.2). I searched NVIDIA's official container registry (Container Release Notes - NVIDIA Docs) for a Docker image with the same package versions as the onboard system, but I could not find one.
  2. So I selected the Docker images with versions closest to the onboard system (image versions 23.06 to 24.02), ran them on my desktop PC, and converted the model with TensorRT. When I then loaded the converted model (ONNX & .trt file) on the Jetson Orin Nano system and ran inference, it returned an error like this: 'The engine plan file is not compatible with this version of TensorRT, expecting library version 8.6.2.3 got 8.6.1.6, please rebuild.' (A quick version check, as in the sketch after this list, shows where the mismatch comes from.)
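
For what it's worth, the mismatch is easy to confirm by running a minimal check on both machines. This is just a sketch; "model.trt" is a placeholder for the engine file built on the desktop:

```python
import tensorrt as trt

# Print the TensorRT version each machine actually runs,
# e.g. 8.6.2.3 on JetPack 6.0 vs 8.6.1.6 in the x86 container.
print("TensorRT:", trt.__version__)

# Deserializing an engine built by a different TensorRT version fails;
# deserialize_cuda_engine() returns None and logs an error like the one quoted above.
logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open("model.trt", "rb") as f:  # placeholder engine file name
    engine = runtime.deserialize_cuda_engine(f.read())
if engine is None:
    print("Engine rejected: version/hardware mismatch, rebuild on the target.")
```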

So I would like to know how to take a TensorRT model converted in a desktop PC environment and run it on an onboard system such as the Jetson Orin Nano. What should I do to solve this issue? Please help me.

Hi,

1. If you go to the library download page, you can download the same version for both the Jetson and x86 environments:

For example, TensorRT 10.7:

2. We have provided a container image for cross-compiling. Please check:

3. Currently, we don’t support compiling a Jetson engine in a desktop environment, as TensorRT picks algorithms based on the hardware resources.
So please convert the engine directly on the target.
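
To illustrate point 3, here is a minimal sketch of building the engine directly on the Jetson with the TensorRT Python API; the file names model.onnx / model.trt and the 1 GiB workspace limit are assumptions, not values from this thread:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit-batch network definition, as required for ONNX models
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:  # placeholder ONNX file name
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB, arbitrary

# The serialized engine is tied to this TensorRT version and this GPU
engine_bytes = builder.build_serialized_network(network, config)
with open("model.trt", "wb") as f:
    f.write(engine_bytes)
```

The trtexec tool that ships with TensorRT does the same from the command line: trtexec --onnx=model.onnx --saveEngine=model.trt.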

Thanks.


Thanks for the reply.
So you mean that if I want to use an ONNX/TRT file on the Jetson Orin Nano, it should be compiled and converted on the same board and environment, and it is not possible to use an ONNX/TRT file converted on a desktop on the Jetson Orin Nano board because of the hardware dependency. Do I understand correctly?

Hi,

Yes, please find more details in our document below:
https://docs.nvidia.com/deeplearning/tensorrt/latest/inference-library/advanced.html#hardware-compatibility

There is a related feature named “Hardware Compatibility”.
However, it is not available on the Jetson platform, so building a Jetson engine in a desktop environment is not supported.
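
For context, on supported desktop GPUs (Ampere and newer, TensorRT 8.6+) this feature is a single builder-config setting. A sketch of the flag only; as noted above, it does not apply to Jetson targets:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# Make the engine loadable on any Ampere-or-newer discrete GPU;
# this setting is not supported when targeting Jetson.
config.hardware_compatibility_level = trt.HardwareCompatibilityLevel.AMPERE_PLUS
```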

Thanks.
