CUDA symlinks in JetPack

I’m attempting to use CUDA/NPP in a Docker image derived from nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime. My code dynamically loads some CUDA/NPP libraries, say libcudart.so or libnppig.so. These libraries, or more precisely the versionless symlinks to them, appear to be missing from the installation:

$ ls -la /usr/local/cuda-11/lib64/libnppig*
lrwxrwxrwx 1 root root       22 Sep 14  2022 /usr/local/cuda-11/lib64/libnppig.so.11 -> libnppig.so.11.4.0.287
-rw-r--r-- 1 root root 35111568 Sep 14  2022 /usr/local/cuda-11/lib64/libnppig.so.11.4.0.287

$ ls -la /usr/local/cuda-11/lib64/libcudart*
lrwxrwxrwx 1 root root     21 Sep 13  2022 /usr/local/cuda-11/lib64/libcudart.so.11.0 -> libcudart.so.11.4.298
-rw-r--r-- 1 root root 699488 Sep 13  2022 /usr/local/cuda-11/lib64/libcudart.so.11.4.298

What is the expected way to reference libcudart.so within that container? Is there a standard way to generate generic/versionless symlinks without having to do it by hand for each library?
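
For now, the best workaround I have is a blunt loop in the derived Dockerfile, along these lines (illustrative only; it links every versioned library under the CUDA lib directory shown above, and I don’t know whether this matches NVIDIA’s intent):

# create libfoo.so -> libfoo.so.X links for anything that lacks one
RUN cd /usr/local/cuda-11/lib64 && \
    for f in *.so.*; do \
        link="${f%%.so.*}.so"; \
        [ -e "$link" ] || ln -s "$f" "$link"; \
    done

My understanding is that the versionless .so links normally come from the CUDA -dev packages (e.g. cuda-cudart-dev-11-4, if I read the packaging right), which the runtime image omits, so installing those in the derived image would be the other route.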

Also, is there a reason the image is built without the CUDA libraries on LD_LIBRARY_PATH, or without a CUDA_HOME variable set?
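
In our derived image we currently set these ourselves, e.g. (paths taken from the listing above; whether this matches the intended setup is exactly my question):

# assumed locations, based on the listing above
ENV CUDA_HOME=/usr/local/cuda-11
ENV LD_LIBRARY_PATH=/usr/local/cuda-11/lib64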

Hi,

Are you using a TX2?
The container you used is only compatible with JetPack 5, which doesn’t support the TX2.

Thanks.

No, this was on an Orin. We also use the TX2, where the relevant libraries are mapped in from the host by the Docker runtime. With JetPack 5.1’s shift towards embedding all the libraries in the container, our preference was to derive from NVIDIA’s base image; however, we ran into the problem described above.
We can fix it manually, of course; however, I find it strange, and I wonder what the intended integration strategy is for client software that links against the JetPack dev kit and runs on an image derived from the stock one.

Hi,

This is a runtime-based container, so it is not intended for development.
Have you tried the nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel image?
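
If it helps, a common multi-stage pattern (a sketch only; the build command and application name are placeholders) is to compile against the devel image and copy the result into the runtime one:

FROM nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel AS build
WORKDIR /src
COPY . .
# the devel image should provide the headers and versionless .so links needed to build
RUN make

FROM nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime
# copy only the built binary into the smaller runtime image
COPY --from=build /src/myapp /usr/local/bin/myapp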

Thanks.
