I can verify CUDA and cuDNN with "nvcc --version" and "dpkg -l | grep cudnn". However, when I type "python3" and try "import tensorrt", I get this error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/tensorrt/__init__.py", line 67, in <module>
    from .tensorrt import *
ImportError: libnvdla_compiler.so: cannot open shared object file: No such file or directory
Could you advise? I made no changes to the container at all, so shouldn't it just work? I looked at this, but it doesn't work for me because that library does not exist:
No, that library is a low-level driver that gets mounted by the NVIDIA runtime. The L4T containers aren’t meant to run on x86, you would need a container built for x86 instead. For example if you were using l4t-pytorch on Jetson, I would recommend using the NGC pytorch container on x86.
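As a quick sanity check of the architecture mismatch described above (a sketch; the image name below is a placeholder, not a specific tag):

```shell
# Print the host CPU architecture. On a PC this is typically x86_64,
# while L4T containers are built for aarch64/arm64.
host_arch=$(uname -m)
echo "host architecture: $host_arch"

# The image's target architecture can be read from its metadata, e.g.:
#   docker image inspect --format '{{.Architecture}}' <l4t-image:tag>
# If the host prints x86_64 and the image reports arm64, the container
# is not meant to run natively on this machine.
```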
I don't believe that libnvdla_compiler.so is a package dependency of TensorRT, because it's installed with the L4T low-level drivers under /usr/lib/aarch64-linux-gnu/tegra. IIRC you should be able to cross-compile applications without it. To actually run applications/containers that use TensorRT, it needs the real hardware. You can't run those programs on x86 even under emulation, because they need access to the actual GPU on the Jetson.
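To confirm this on a given system, a small diagnostic along these lines can check whether the driver library is visible at all (a sketch; the Tegra path comes from the reply above, and the function name is my own):

```python
import ctypes
import os

# Where the L4T low-level drivers install the library on a Jetson
# (per the reply above); on an x86 host this path will not exist.
TEGRA_LIB = "/usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so"

def nvdla_compiler_visible() -> bool:
    """Return True if libnvdla_compiler.so can be found or loaded."""
    if os.path.exists(TEGRA_LIB):
        return True
    try:
        # Ask the dynamic linker directly, in case the NVIDIA runtime
        # mounted the library somewhere on the default search path.
        ctypes.CDLL("libnvdla_compiler.so")
        return True
    except OSError:
        return False

print("libnvdla_compiler.so visible:", nvdla_compiler_visible())
```

If this prints False inside the container, the missing library confirms you are on a system where the NVIDIA runtime has not mounted the Tegra drivers.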
JetPack 5.0.2 is the first JetPack version for which an official cross-compilation container was provided. For previous versions, you may need to create your own or do the cross-compiling outside of a container. If you have further questions about cross-compilation, I would recommend opening a new topic about it, as it's not something that I personally do.