I can verify CUDA and cuDNN with "nvcc --version" and "dpkg -l | grep cudnn". However, when I run "python3" and try "import tensorrt", I get this error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/tensorrt/__init__.py", line 67, in <module>
    from .tensorrt import *
ImportError: libnvdla_compiler.so: cannot open shared object file: No such file or directory
Could you advise? I made no changes to the container at all; shouldn't it just work? I looked at this, but it doesn't work for me because the library does not exist:
Hi @realimposter, libnvdla_compiler.so is a lower-level driver that gets mounted from the host device. When you started the container, did you run it with --runtime nvidia?
Do you have the file /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so on your system?
Also, which version of JetPack-L4T are you running? If it’s JetPack 5, please try one of the newer l4t-tensorrt container tags.
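If it helps, here is a quick sanity check you can run from inside the container (a rough sketch, using only the standard library) to see both whether the host-side file exists and whether the dynamic loader can actually resolve the library. Without `--runtime nvidia`, the load should fail the same way the tensorrt import does:

```python
import ctypes
import os

# Path where the L4T driver libraries normally live on the host
# (the exact path mentioned above; adjust if yours differs).
host_path = "/usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so"
print("file exists:", os.path.exists(host_path))

# Ask the dynamic loader to resolve the library by soname. With
# --runtime nvidia the container runtime mounts it in; without it,
# this raises the same "cannot open shared object file" error.
try:
    ctypes.CDLL("libnvdla_compiler.so")
    print("libnvdla_compiler.so: loadable")
except OSError as exc:
    print("libnvdla_compiler.so: not loadable:", exc)
```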
I have checked /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so, and it does exist on my device (outside of the container, at the exact path you described).
When I run with --runtime nvidia, I see docker: Error response from daemon: Unknown runtime specified nvidia. Any suggestions?
Hi @realimposter, sorry for the delay - you shouldn’t have to manually install other GPU drivers, they come with JetPack-L4T when you flash the device.
It sounds like the nvidia-container-runtime wasn’t installed when you flashed JetPack. Normally this gets installed by SDK Manager tool. You can try installing these packages from apt:
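Separately from the package install, Docker also needs to know the runtime is registered. For reference (this is my assumption about a typical Jetson setup, not something specific to your device), /etc/docker/daemon.json usually contains an entry like:

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
```

After editing this file, restart Docker (sudo systemctl restart docker) and check "docker info | grep -i runtime". If the "nvidia" runtime entry is missing, the "Unknown runtime specified nvidia" error you saw is expected.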
No, that library is a low-level driver that gets mounted by the NVIDIA runtime. The L4T containers aren’t meant to run on x86, you would need a container built for x86 instead. For example if you were using l4t-pytorch on Jetson, I would recommend using the NGC pytorch container on x86.
Since libnvdla_compiler.so is a dependency of libnvinfer.so, which is a dependency of TensorRT, if I want to build TensorRT with a cross compiler, I need libnvdla_compiler.so.
I have looked at the recent official image (JetPack Cross Compilation container | NVIDIA NGC); it contains libnvinfer.so, but still no libnvdla_compiler.so. This is weird, since it's meant for cross-compiling for JetPack, isn't it?
I don’t believe that libnvdla_compiler.so is a package dependency of TensorRT, because it’s installed with the L4T low-level drivers under /usr/lib/aarch64-linux-gnu/tegra. IIRC you should be able to cross-compile applications without it. To actually run applications/containers that use TensorRT, it needs the real hardware. You can’t run those programs on x86 even under emulation because it needs access to the actual GPU on the Jetson.
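One way to check this yourself (a rough sketch) is to ask the system linker which of these libraries it can actually locate. On an x86 host, or in a container started without the NVIDIA runtime, the DLA library should come back unresolved, while cross-compilation against libnvinfer can still work via the SDK headers and stubs:

```python
from ctypes.util import find_library

# find_library returns the library's soname when the linker can locate
# it, or None when it cannot (expected for the DLA driver on x86, or
# inside a container started without --runtime nvidia).
for name in ("nvdla_compiler", "nvinfer"):
    print(name, "->", find_library(name))
```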
JetPack 5.0.2 is the first version for which an official cross-compilation container was provided. For previous versions, you may need to create your own or do the cross-compiling outside of a container. If you have further questions about cross-compilation, I would recommend opening a new topic about it, as it's not something that I personally do.