Mounting CUDA onto L4T docker image issues: libcurand.so.10: cannot open .. No such file or directory

Hi,

I have an L4T docker image into which I built PyTorch and other dependencies during the build phase. Inside the container, with CUDA mounted, I installed pycuda (pip3 install pycuda). When I try to run my application I get the following error:

import pycuda.driver as cuda
File "/usr/local/lib/python3.6/dist-packages/pycuda/driver.py", line 62, in <module>
from pycuda._driver import * # noqa
ImportError: libcurand.so.10: cannot open shared object file: No such file or directory

At first my LD_LIBRARY_PATH variable was set to “/usr/local/cuda-10.2/targets/aarch64-linux/lib”. I assume this is because the host system this image was built on used CUDA 10.2? I changed this variable to “/usr/local/cuda-10.0/targets/aarch64-linux/lib” in the ~/.bashrc file and then ran source ~/.bashrc.

In /usr/local/cuda-10.0/targets/aarch64-linux/lib (which is shared with the container) I have a libcurand.so.10.0 file and a libcurand.so.10.0.326 file, but not libcurand.so.10.

I’m extremely confused, because the application I wrote works on the host system. It seems pycuda wants a file that does not exist in the shared CUDA directory inside the L4T container. Is there something I am missing here?
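For what it's worth, the immediate symptom is a missing soname symlink: CUDA 10.0 ships libcurand.so.10.0 -> libcurand.so.10.0.326, while a pycuda built against CUDA 10.2 asks the loader for libcurand.so.10. A minimal sketch of the situation (done in a scratch directory with dummy files, not against the real CUDA install — and note that symlinking a 10.0 library under a 10.2 soname is fragile, since the ABIs may differ; the proper fix is matching versions):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# Mimic what the CUDA 10.0 lib directory actually contains:
touch libcurand.so.10.0.326                    # the real library file
ln -s libcurand.so.10.0.326 libcurand.so.10.0  # symlink CUDA 10.0 provides

# The soname pycuda (built against CUDA 10.2) is asking for does not exist.
# Creating it by hand makes the loader find *something*, but it is the
# wrong-version library underneath:
ln -s libcurand.so.10.0 libcurand.so.10

# The chain now resolves down to libcurand.so.10.0.326:
readlink -f libcurand.so.10
```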

EDIT:

Looks like nvcc --version in the image returns Cuda compilation tools, release 10.2, V10.2.89, whereas I have CUDA 10.0 installed on the host system. I’ve also found that this image only officially supports CUDA 10.2.

Hi,

You can use JetPack4.4 for CUDA 10.2 and JetPack4.3 for CUDA 10.0.

There are some GPU driver dependencies between the OS and the CUDA toolkit.
You will need to use the OS and CUDA toolkit from the same JetPack version.

JetPack 4.4 is our latest release and can give you better performance.
However, some packages are not yet available for JetPack 4.4, which might limit your usage.
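As an illustration, the pairing above can be checked against the device's L4T release (the R32.x number in /etc/nv_tegra_release). This is a sketch with a hypothetical helper, not an NVIDIA tool; the 32.4.4 = JetPack 4.4.1 mapping comes from this thread, while the 32.3.x line for JetPack 4.3 is an assumption on my part:

```shell
# Hypothetical helper: map an L4T release string to the expected
# JetPack/CUDA pairing discussed above.
jetpack_cuda_for_l4t() {
  case "$1" in
    32.4.*) echo "JetPack 4.4.x / CUDA 10.2" ;;
    32.3.*) echo "JetPack 4.3 / CUDA 10.0" ;;   # assumed mapping
    *)      echo "unknown L4T release: $1" ;;
  esac
}

# On a Jetson, the release number comes from something like:
#   head -n 1 /etc/nv_tegra_release
jetpack_cuda_for_l4t "32.4.4"   # prints: JetPack 4.4.x / CUDA 10.2
```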

Thanks

Hello @AastaLLL,

I get the same problem on my Jetson Nano, except that I run the following configuration:

  • JetPack v.4.4.1 (L4T 32.4.4)
  • CUDA v10.2.89 (output of nvcc -V from inside both the Jetson and the Docker container)
  • PyTorch v1.6.0

I used the NVIDIA L4T PyTorch docker container provided by NVIDIA.

As the above solution does not apply to me (the Docker and host CUDA versions are the same), what other solutions might I try?

Thanks in advance!

Hi christoph3,

Please open a new topic for your issue. Thanks.