I can start the Docker container successfully, but CUDA-related applications cannot be executed. I checked the libraries in /usr/local/cuda and found no CUDA runtime or shared libraries there.
Hi @yawei.yang, what was the command that you used to run the docker container? Did it use --runtime nvidia? Which version of JetPack-L4T are you running? (you can check this with cat /etc/nv_tegra_release)
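For reference, the checks being asked about look something like this (the container image and tag are just examples; substitute the one you are actually using):

```shell
# On the host: check which JetPack-L4T release is installed
cat /etc/nv_tegra_release

# Run the container with the NVIDIA runtime so the CUDA components
# can be made available inside it (l4t-base tag shown is an example)
sudo docker run -it --runtime nvidia nvcr.io/nvidia/l4t-base:r32.7.1
```

Without `--runtime nvidia`, the container starts fine but none of the CUDA libraries are mounted in, which matches the symptom described above.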
One additional piece of information: I am running this on the production-version board, which only has around 14 GB of eMMC, so I installed just the OS without the entire set of L4T libraries.
Would that be a concern?
On JetPack 4.x, CUDA/cuDNN/TensorRT get mounted into the container from the host device when --runtime nvidia is used, so those components need to be installed on the device in order for them to show up inside the containers. On JetPack 5.x, these packages are installed inside the containers themselves.
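On JetPack 4.x this mounting is driven by CSV files that the NVIDIA container runtime reads on the host; each CSV lists the host files for one component. A quick way to see which components would be mounted, and which host libraries a given component expects, is something like (paths from a typical JetPack 4 install):

```shell
# Each CSV here describes one component the runtime can mount
ls /etc/nvidia-container-runtime/host-files-for-container.d/
# typically: cuda.csv  cudnn.csv  l4t.csv  tensorrt.csv  ...

# Inspect which host library files the cuDNN mount expects to exist
grep 'lib' /etc/nvidia-container-runtime/host-files-for-container.d/cudnn.csv
```

If a file listed in a CSV is missing on the host (because the corresponding package was never installed), it simply will not appear inside the container.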
Thanks for the feedback.
Since we are using the production board, which has limited eMMC storage, we cannot install the entire set of L4T packages.
Could you kindly point out which L4T components / deb packages we should install so that they get mounted into the container as well? Thanks.
After installing these, we can see the correct CUDA and cuDNN libraries inside the Docker container. Posting this information just in case anyone else runs into the same issue. Thanks!
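For anyone verifying the same fix, a quick sanity check inside the running container might look like this (the exact /usr/local/cuda-* version directory depends on your JetPack release):

```shell
# Inside the container: confirm the CUDA runtime library is visible
ls /usr/local/cuda/lib64/ | grep cudart

# Confirm the loader can find both the CUDA runtime and cuDNN
ldconfig -p | grep -E 'libcudart|libcudnn'
```

If both commands return matches, the host components were installed and mounted correctly.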
OK thanks @yawei.yang, glad that you were able to get it working. As you found, you only need to install the components that you need to use inside the container (and for PyTorch, that’s cuDNN).