CUDA libraries are not found in jetson-containers Docker

Hello,

I tried to use a pre-built container from GitHub - dusty-nv/jetson-containers: Machine Learning Containers for NVIDIA Jetson and JetPack-L4T:
nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.9-py3

I can start the container successfully; however, CUDA-related applications cannot be executed. I checked /usr/local/cuda and found no CUDA runtime or shared libraries there.
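
For reference, a minimal check along these lines (illustrative commands, not necessarily the exact ones I ran) shows whether the CUDA libraries are present and whether PyTorch can reach the GPU:

# Inside the running container: look for the CUDA toolkit and runtime libraries
ls -l /usr/local/cuda/
ls /usr/local/cuda/lib64/libcudart.so*
# Check whether PyTorch can see the GPU at all
python3 -c "import torch; print(torch.cuda.is_available())"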

Could you help to check the reason, thanks!

Hi @yawei.yang, what was the command that you used to run the docker container? Did it use --runtime nvidia? Which version of JetPack-L4T are you running? (you can check this with cat /etc/nv_tegra_release)
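
In case it helps, both of these can be confirmed on the host with something like (illustrative):

# Check that the nvidia runtime is registered with Docker on the host
sudo docker info | grep -i runtime
# Check the L4T release installed on the host
cat /etc/nv_tegra_release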

Yes, I added --runtime nvidia. The entire command is “sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3”.

The L4T version is 32.7.2


One additional piece of information: I was trying to run this on the production-version board, which only has around 14 GB of eMMC, so I just installed the OS without the entire set of L4T libraries.
Would that be a concern?

I cross-checked with an earlier version on the developer board and could find the CUDA libraries there:
nvcr.io/nvidia/l4t-pytorch:r32.5.0-pth1.7-py3

On JetPack 4.x, CUDA/cuDNN/TensorRT get mounted into the container from the host device when --runtime nvidia is used, so those components need to be installed on the device in order for them to show up inside the containers. On JetPack 5.x, these packages are installed inside the containers themselves.
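
For context, this mounting on JetPack 4.x is driven by CSV files that the nvidia-container-csv-* packages place on the host; you can inspect them with something like the following (the path below is assumed from typical JetPack 4.x installs):

# On the host: the nvidia runtime mounts whatever these CSV files list into the container
# (path assumed; typically /etc/nvidia-container-runtime/host-files-for-container.d/)
ls /etc/nvidia-container-runtime/host-files-for-container.d/
# Each entry in e.g. cuda.csv names a host file or directory to bind-mount at container start
head /etc/nvidia-container-runtime/host-files-for-container.d/cuda.csv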

Thanks for the feedback.
Since we are using the production board, which has limited storage on the eMMC, we cannot install the entire set of L4T packages.

Could you kindly point out which components / deb packages from L4T we should install so that CUDA also gets mounted into the container? Thanks.

We have found that these 4 packages are the ones that matter:
sudo apt install cuda-toolkit-10-2
sudo dpkg -i nvidia-container-csv-cuda_10.2.460-1_arm64.deb
sudo dpkg -i libcudnn8_8.2.1.32-1+cuda10.2_arm64.deb
sudo dpkg -i nvidia-container-csv-cudnn_8.2.1.32-1+cuda10.2_arm64.deb
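
Given the limited eMMC space, it may also be worth checking sizes before installing and confirming what the CSV packages actually put on the host; for example (illustrative commands):

# Check download / installed size before committing space on the eMMC
apt-cache show cuda-toolkit-10-2 | grep -iE '^(Size|Installed-Size)'
# After installing, list the files provided by a CSV package
dpkg -L nvidia-container-csv-cuda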

After installing these, we can see the correct CUDA and cuDNN libraries in the container. Posting this information just in case anyone else runs into the same issue. Thanks!
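
A quick way to verify (sketch, assuming the same container image as above) is to start the container with the nvidia runtime and check that the libraries and the GPU are visible:

# Start the container so the host CUDA/cuDNN get mounted in
sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.9-py3
# Inside the container: the mounted libraries should now be present
ls /usr/local/cuda/lib64/libcudart.so*
# (cuDNN location assumed; it is typically under /usr/lib/aarch64-linux-gnu on Jetson)
ls /usr/lib/aarch64-linux-gnu/libcudnn.so*
python3 -c "import torch; print(torch.cuda.is_available())"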

OK thanks @yawei.yang, glad that you were able to get it working. As you found, you only need to install the components that you need to use inside the container (and for PyTorch, that’s cuDNN).
