I’m attempting to use CUDA/NPP in a Docker image derived from nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime. My code dynamically loads some CUDA/NPP libraries at runtime, e.g. libcudart.so or libnppig.so. These libraries — or, more precisely, the versionless symlinks to them — seem to be missing from the installation:
$ ls -la /usr/local/cuda-11/lib64/libnppig*
lrwxrwxrwx 1 root root 22 Sep 14 2022 /usr/local/cuda-11/lib64/libnppig.so.11 -> libnppig.so.11.4.0.287
-rw-r--r-- 1 root root 35111568 Sep 14 2022 /usr/local/cuda-11/lib64/libnppig.so.11.4.0.287
$ ls -la /usr/local/cuda-11/lib64/libcudart*
lrwxrwxrwx 1 root root 21 Sep 13 2022 /usr/local/cuda-11/lib64/libcudart.so.11.0 -> libcudart.so.11.4.298
-rw-r--r-- 1 root root 699488 Sep 13 2022 /usr/local/cuda-11/lib64/libcudart.so.11.4.298
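For context, here is a minimal reproducer of what my code effectively does (a sketch, assuming a compiler is available in the container; the library name and path are just the ones from the listing above). Loading the versionless name fails for me, while the versioned name libnppig.so.11 loads fine:

$ cat >/tmp/try_dlopen.c <<'EOF'
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Open by the generic, versionless name, the way my code does. */
    void *h = dlopen("libnppig.so", RTLD_NOW);
    if (!h) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    puts("loaded");
    dlclose(h);
    return 0;
}
EOF
$ gcc /tmp/try_dlopen.c -o /tmp/try_dlopen -ldl
$ LD_LIBRARY_PATH=/usr/local/cuda-11/lib64 /tmp/try_dlopen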
What is the expected way to reference libcudart.so within that container? Is there a standard way to generate generic/versionless symlinks without having to create them by hand for each library?
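By hand I mean something along these lines, repeated per library (using the paths from this image), or a blanket shell loop over every versioned library — a rough sketch, and I’m not sure it’s the blessed approach:

$ ln -s /usr/local/cuda-11/lib64/libcudart.so.11.0 /usr/local/cuda-11/lib64/libcudart.so
$ ln -s /usr/local/cuda-11/lib64/libnppig.so.11 /usr/local/cuda-11/lib64/libnppig.so
$ # or, generically: create lib<name>.so for every versioned lib in the directory
$ cd /usr/local/cuda-11/lib64 && for f in lib*.so.*; do ln -sf "$f" "${f%%.so.*}.so"; done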
Also, is there a reason the image is built without the CUDA libraries on LD_LIBRARY_PATH and without CUDA_HOME set?
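For now I set both myself in my derived image (shown here as plain shell; the path is just where the libraries live in this image), and I’m wondering whether that is the expected thing to do:

$ export CUDA_HOME=/usr/local/cuda-11
$ export LD_LIBRARY_PATH=/usr/local/cuda-11/lib64:${LD_LIBRARY_PATH}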