TRT model generation on DRIVE Orin fails

Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure of its number)
other

SDK Manager Version
1.9.0.10816
other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Hello,

I’m trying to generate TRT models on DRIVE Orin; however, I run into a problem:

Traceback (most recent call last):
  File "traffic_light_net_trt.py", line 4, in <module>
    import torch
  File "/home/nvidia/.local/lib/python3.6/site-packages/torch/__init__.py", line 195, in <module>
    _load_global_deps()
  File "/home/nvidia/.local/lib/python3.6/site-packages/torch/__init__.py", line 148, in _load_global_deps
    ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
  File "/usr/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libmpi_cxx.so.20: cannot open shared object file: No such file or directory

I have tried to follow this post: https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048

It didn’t work, as I failed to install the necessary packages.
As far as I know, a TRT model has to be generated on the same hardware it will later run on.

I want to generate the TRT model on my host and be able to run it on DRIVE Orin. Is that possible?

Thanks,

Dear @NikolayChernuha,
Yes. You can generate an ONNX model on the host and convert it to a TRT model on the target. We recommend using the host for development and the target for deployment.
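
For the host-side half of that flow, a minimal sketch of an ONNX export with torch.onnx.export might look like the following (TrafficLightNet, the input shape, and the file names here are placeholders for illustration, not your actual model):

import torch
import torch.nn as nn

# Placeholder network standing in for the real model; substitute your
# own trained module here.
class TrafficLightNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.head = nn.Linear(8, 4)

    def forward(self, x):
        x = self.conv(x).mean(dim=(2, 3))  # global average pool
        return self.head(x)

model = TrafficLightNet().eval()

# Dummy input matching the shape the model expects at inference time.
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX; the .onnx file is hardware-independent and can be
# copied to the target for TRT engine generation.
torch.onnx.export(
    model,
    dummy_input,
    "traffic_light_net.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)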

Thanks for the answer; however, this solution isn’t suitable for the end product, as it means the model will still be generated on the device, which we’d like to avoid.
Is there any way to generate the TRT model on the host and only upload it to the device?

Thanks,

Dear @NikolayChernuha,
No. When you prepare a TRT model on a GPU, TensorRT selects the CUDA kernels that are optimal for that GPU. So it is advisable to prepare the TRT model on the target.
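
On the target, the ONNX file can then be converted into an engine, either with the trtexec tool shipped with TensorRT or with a short script against the TensorRT Python API. A minimal sketch, assuming a TensorRT 8.x release (file names are placeholders; the workspace-limit call differs slightly on older versions):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# Parse the ONNX file produced on the host.
with open("traffic_light_net.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parsing failed")

config = builder.create_builder_config()
# 1 GiB workspace; on older TensorRT versions use
# config.max_workspace_size = 1 << 30 instead.
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)

# Kernel selection and timing happen during this build step, which is
# why it should run on the GPU you will deploy on.
engine_bytes = builder.build_serialized_network(network, config)
with open("traffic_light_net.trt", "wb") as f:
    f.write(engine_bytes)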

Dear @SivaRamaKrishnaNV
So, as I understand it, if I prepare a model on one DRIVE Orin, it will work on another as well?

Dear @NikolayChernuha,
If I prepare a model on one DRIVE Orin, will it work on another as well?

Yes

Thanks @SivaRamaKrishnaNV ,

What about desktop GPUs?
If, for example, I generate a model for a 3090, can I use it on other 3090s, or might I face some problems?
If so, what problems might I face?

Thanks,

Dear @NikolayChernuha,
If the GPU is the same, then you can load the already generated TRT model and use it. Please make sure you have the same DRIVE OS version on both hosts.
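
For completeness, loading an already generated engine on an identical GPU is only a deserialization step; a minimal sketch with placeholder file names:

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

# Load the engine built earlier on a machine with the same GPU and
# the same software stack (TensorRT / DRIVE OS version).
with open("traffic_light_net.trt", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# The execution context is what inference is actually run with.
context = engine.create_execution_context()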