libnvdla_compiler.so error on NVIDIA Jetson container

Hi, I downloaded nvcr.io/nvidia/l4t-tensorrt:r8.0.1-runtime from NVIDIA L4T TensorRT | NVIDIA NGC. I am running it on AGX Xavier.

I can verify CUDA and CUDNN with “nvcc --version” and “dpkg -l | grep cudnn”. However, when I type “python3” and try “import tensorrt”, I have this error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/tensorrt/__init__.py", line 67, in <module>
    from .tensorrt import *
ImportError: libnvdla_compiler.so: cannot open shared object file: No such file or directory

Could you advise? I made no changes to the container at all; shouldn't it just work? I looked at this, but it doesn't work for me because the library does not exist:

Hi @realimposter, libnvdla_compiler.so is a lower-level driver that gets mounted from the host device. When you started the container, did you run it with --runtime nvidia?

Do you have the file /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so on your system?

Also, which version of JetPack-L4T are you running? If it’s JetPack 5, please try one of the newer l4t-tensorrt container tags.
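A minimal sketch of both checks (assumes a standard JetPack install; the `docker info` grep just lists whatever runtimes are registered):

```shell
# Check 1: does the host have the driver library that the runtime mounts in?
LIB=/usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so
if [ -e "$LIB" ]; then
    echo "driver library present on host"
else
    echo "driver library missing on host"
fi

# Check 2: is the nvidia runtime registered with Docker?
# (prints the Runtimes line from `docker info`; falls back if docker is absent)
docker info 2>/dev/null | grep -i runtimes || echo "docker info unavailable"
```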

Hi @dusty_nv

I am using JetPack 4.6 v1, but do note that I am using a custom AGX Xavier provided by a vendor: MIC-730AI - AI Inference System based on NVIDIA® Jetson AGX Xavier™ - Advantech.

I have checked for /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so, and it does exist on my device (outside of the container, at the exact path you described).

When I run with --runtime nvidia, I see docker: Error response from daemon: Unknown runtime specified nvidia. Any suggestions?

Also, I am not sure if I have the NVIDIA drivers installed. Is there a way to verify? Does having libnvdla_compiler.so mean I have a driver, or do I have to install one from something like Linux-aarch64 (ARM64) Display Driver | 525.53 | Linux aarch64 | NVIDIA?

Hi @realimposter, sorry for the delay - you shouldn’t have to manually install other GPU drivers, they come with JetPack-L4T when you flash the device.

It sounds like the nvidia-container-runtime wasn't installed when you flashed JetPack. Normally it gets installed by the SDK Manager tool. You can try installing these packages from apt:

apt-cache search nvidia-container*
libnvidia-container-tools - NVIDIA container runtime library (command-line tools)
libnvidia-container0 - NVIDIA container runtime library
libnvidia-container1 - NVIDIA container runtime library
nvidia-container-csv-cuda - Jetpack CUDA CSV file
nvidia-container-csv-cudnn - Jetpack CUDNN CSV file
nvidia-container-csv-tensorrt - Jetpack TensorRT CSV file
nvidia-container-csv-visionworks - Jetpack VisionWorks CSV file
nvidia-container-runtime - NVIDIA container runtime
nvidia-container-toolkit - NVIDIA container runtime hook
nvidia-container - NVIDIA Container Meta Package
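If apt can see those packages, the install and runtime registration might look like this (a sketch: the daemon.json entry shown is the standard nvidia-container-runtime registration, and the install lines are left as comments since they need JetPack's apt sources configured):

```shell
# Install the runtime packages (needs JetPack's apt repositories configured):
#   sudo apt-get update
#   sudo apt-get install -y nvidia-container-toolkit nvidia-container-runtime

# Docker also needs the runtime registered in /etc/docker/daemon.json.
# Write an example of the expected entry to a temp file for inspection:
cat <<'EOF' > /tmp/daemon.json.example
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
cat /tmp/daemon.json.example

# After merging that into /etc/docker/daemon.json:
#   sudo systemctl restart docker
#   sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-tensorrt:r8.0.1-runtime
```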

So the container itself does not contain libnvdla_compiler.so? I also ran into this problem when the host is x86-64.

I was using custom hardware for Jetson. I followed the install instructions from the vendor and it worked.

No, that library is a low-level driver that gets mounted by the NVIDIA runtime. The L4T containers aren’t meant to run on x86, you would need a container built for x86 instead. For example if you were using l4t-pytorch on Jetson, I would recommend using the NGC pytorch container on x86.
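A quick way to tell which kind of container applies (a sketch; the image tags are placeholders, not pinned versions):

```shell
# The L4T containers only run on Jetson (aarch64 with the NVIDIA runtime):
#   sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-pytorch:<tag>
# On an x86_64 workstation, use the regular NGC container instead:
#   sudo docker run -it --rm --gpus all nvcr.io/nvidia/pytorch:<tag>
# Check which architecture the host actually is:
uname -m    # prints aarch64 on Jetson, x86_64 on a PC
```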

Then what if I want to cross-compile a package that depends on TensorRT?

I haven’t cross-compiled containers, but you could see here: GitHub - NVIDIA/nvidia-docker: Build and run Docker containers leveraging NVIDIA GPUs

There are also some other forum topics about this - here is one:

Since libnvdla_compiler.so is a dependency of libnvinfer.so, which TensorRT depends on, if I want to build TensorRT with a cross-compiler, I need libnvdla_compiler.so.

I have seen the recent official image JetPack Cross Compilation container | NVIDIA NGC; it has libnvinfer.so, but still no libnvdla_compiler.so. This is weird, since it's meant for cross-compiling for JetPack, isn't it?

I don’t believe that libnvdla_compiler.so is a package dependency of TensorRT, because it’s installed with the L4T low-level drivers under /usr/lib/aarch64-linux-gnu/tegra. IIRC you should be able to cross-compile applications without it. To actually run applications/containers that use TensorRT, it needs the real hardware. You can’t run those programs on x86 even under emulation because it needs access to the actual GPU on the Jetson.
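Per the reply above, cross-compiling against TensorRT on an x86 host can be sketched like this (toolchain and paths are illustrative; libnvdla_compiler.so is only needed at run time on the device):

```shell
# Illustrative cross-compile of a TensorRT app from x86_64 to aarch64.
# Assumes the standard Ubuntu cross toolchain and a local copy of the
# Jetson's TensorRT headers/libs (paths are placeholders):
#   sudo apt-get install g++-aarch64-linux-gnu
#   aarch64-linux-gnu-g++ app.cpp \
#       -I/path/to/jetson/tensorrt/include \
#       -L/path/to/jetson/tensorrt/lib -lnvinfer \
#       -o app
# The resulting binary still has to be copied to the Jetson to run.
aarch64-linux-gnu-g++ --version 2>/dev/null || echo "cross toolchain not installed here"
```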


Maybe that was my mistake. I just tried to run trtexec and it said it required libnvdla_compiler.so.

So we've reached the point that I can't run CUDA-based programs on an x86-64 host.

By the way, what if I want to cross-compile for a certain JetPack version? The container you provided is only for 5.0.2 (JetPack Cross Compilation container | NVIDIA NGC).

JetPack 5.0.2 is the first version for which an official cross-compilation container was provided. For previous versions, you may need to create your own or do the cross-compiling outside of a container. If you have further questions about cross-compilation, I would recommend opening a new topic, as it's not something that I personally do.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.