[JetPack 6.0 container] ImportError: libnvcudla.so: cannot open shared object file: No such file or directory

Got the JetPack 6.0 container from NVIDIA L4T JetPack on NVIDIA NGC.
Followed the "Installing the NVIDIA Container Toolkit" guide (NVIDIA Container Toolkit 1.14.4 documentation) to set up nvidia-container-runtime through nvidia-container-toolkit.
I’m on a Jetson AGX Orin.

When I run the container and then try to import tensorrt in Python, I get the error shown below. I checked the versions and they look correct (CUDA 12.2, TensorRT 8.6, Python 3.10), though I don't know how to check cuDNN. Also, PyTorch doesn't come installed? I need PyTorch as well, so it's odd the container doesn't have it. I've tried to debug this error by adding to LD_LIBRARY_PATH, but that doesn't work. Any ideas? Thanks.

sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-jetpack:r36.2.0
root@1336a2c29535:/# python3 -c "import tensorrt"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python3.10/dist-packages/tensorrt/__init__.py", line 67, in <module>
    from .tensorrt import *
ImportError: libnvcudla.so: cannot open shared object file: No such file or directory
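For reference, here's roughly what I ran inside the container to confirm the library really isn't there (a quick sketch, not an exhaustive check):

```shell
# Inside the container: if the NVIDIA runtime didn't mount libnvcudla.so,
# the file simply isn't present, so tweaking LD_LIBRARY_PATH can't help.
ldconfig -p | grep nvcudla || echo "libnvcudla not in the linker cache"
find /usr/lib -name 'libnvcudla*' 2>/dev/null || true
```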

Hi @alsozatch, do you have CSV files under /etc/nvidia-container-runtime/host-files-for-container.d/? These are responsible for mounting low-level drivers (such as libnvcudla.so) into the containers when --runtime nvidia is used, and they are typically installed along with the nvidia-container metapackage from apt. SDK Manager also sets up your Jetson with nvidia-docker, although I'm not sure whether those other instructions do (they are typically for x86).
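You can inspect those CSV specs on the host with something like the following (a sketch assuming the default nvidia-container-runtime layout):

```shell
# On the Jetson host (not inside the container): list the CSV mount specs
# and check whether any of them reference libnvcudla.
d=/etc/nvidia-container-runtime/host-files-for-container.d
ls "$d" 2>/dev/null || echo "directory missing: $d"
grep -rs nvcudla "$d" || echo "no CSV entry for libnvcudla in $d"
```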

JetPack doesn't come with PyTorch pre-installed, so it's not in the l4t-jetpack container. There are other containers available that include it, though.

I have only l4t.csv in that directory. It contains both of the following:

lib, /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so
lib, /usr/lib/aarch64-linux-gnu/tegra/libnvdla_runtime.so

But no libnvcudla.so, and I also don't have any /usr/lib/aarch64-linux-gnu/tegra/libnvcudla.so on the host. When I installed nvidia-container-toolkit I used:

sudo apt-get install nvidia-container-toolkit=1.14.4-1 nvidia-container-toolkit-base=1.14.4-1 libnvidia-container-tools=1.14.4-1 libnvidia-container1=1.14.4-1
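To rule out the host entirely, I searched for the library more broadly (a rough sketch; the dpkg query just asks whether any installed package provides the file):

```shell
# Search the host for libnvcudla anywhere under /usr -- if the library isn't
# on the host at all, no CSV entry or LD_LIBRARY_PATH change inside the
# container can make it appear.
find /usr/lib /usr/local/lib -name 'libnvcudla*' 2>/dev/null || true
dpkg -S libnvcudla.so 2>/dev/null || echo "no installed package provides libnvcudla.so"
```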

When I install without specifying a version, it installs 1.15.0~rc.2-1, which still doesn't provide libnvcudla.so and also breaks docker run, so I went back to 1.14.4 so it can at least run the container.

Does dustynv/pytorch:2.1-r36.2.0 (2023-12-14, 7.2GB) have JetPack 6.0 as well? Then I could just use that instead of the one I'm trying now and have both JetPack 6.0 and PyTorch.

And thanks for the quick reply.

Yes, and there is also dustynv/l4t-pytorch:r36.2.0, which includes torchvision, torchaudio, and some other PyTorch-related libraries in addition to PyTorch.

Do you have these drivers on your device (outside the container)?

$ ls -ll /usr/lib/aarch64-linux-gnu/tegra/libnvdla*
-rw-r--r-- 1 root root 8138688 Nov 30 13:57 /usr/lib/aarch64-linux-gnu/tegra/libnvdla_compiler.so
-rw-r--r-- 1 root root 6499168 Nov 30 13:57 /usr/lib/aarch64-linux-gnu/tegra/libnvdla_runtime.so

If so, hopefully my r36.2.0 containers on the dustynv Docker Hub that we were discussing above will work for you instead.

Awesome, dustynv/l4t-pytorch:r36.2.0 works for me. I do have those two drivers on my device. Thanks!

