Docker runtime cannot access the GPU on Jetson NX

I’m using a customized BSP based on L4T 32.4.3 to flash the eMMC on a Jetson NX attached to a customized carrier board, and I’m having issues with the Docker runtime accessing the GPU from the standard L4T container, nvcr.io/nvidia/l4t-base:r32.5.0. The Docker engine cannot access the device, and I get the following error:

Cannot start service : could not select device driver "" with capabilities: [[gpu]]

This exact same compose file works on an NX with an NVIDIA devkit carrier board. Please advise: is a mismatch between the Docker image and the NX image causing the issue?
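For context, here is a minimal sketch of the kind of GPU request in the compose file that triggers this error (the service name and image tag here are illustrative, not the actual file):

services:
  l4t-test:
    image: nvcr.io/nvidia/l4t-base:r32.5.0
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]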

This problem may be due to mismatched JetPack versions between the container image (r32.5.0) and the host BSP (L4T 32.4.3):

Hi,

As mehmetdeniz mentioned, could you give nvcr.io/nvidia/l4t-base:r32.4.3 a try?
Thanks.

I had a similar thought earlier, but I cannot run the following with multiple versions of the L4T image:

$ docker run --rm --gpus all -it nvcr.io/nvidia/l4t-base:r32.4.3
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].

The older --runtime nvidia approach doesn’t work either:

$ docker run --rm --runtime nvidia -it nvcr.io/nvidia/l4t-base:r32.4.3
docker: Error response from daemon: Unknown runtime specified nvidia.

What should /etc/docker/daemon.json contain?

Should I be using other flags?

What else can I try?

I could not find a complete solution to this problem on the Jetson platform, but you can try the following:

sudo systemctl daemon-reload
sudo systemctl restart docker
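If it still fails after the restart, it may be worth checking whether the nvidia runtime is registered with the Docker daemon at all, for example:

$ docker info | grep -i runtime

If nvidia does not appear in the Runtimes line, the runtime is not registered and the error above is expected.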

If you are getting that error, then the nvidia-docker runtime from JetPack appears not to have been properly installed. Did you install those parts of JetPack in your customized BSP?

Hi, we have tried to install them after flashing the BSP.

Is there a way to install the docker runtime without flashing the BSP image?

Normally it would get installed by SDK Manager. These are the Docker-related packages that get installed:

$ sudo dpkg-query -l | grep nvidia

ii  libnvidia-container-tools                     0.9.0~beta.1                                     arm64        NVIDIA container runtime library (command-line tools)
ii  libnvidia-container0:arm64                    0.9.0~beta.1                                     arm64        NVIDIA container runtime library
ii  nvidia-container-csv-cuda                     10.2.89-1                                        arm64        Jetpack CUDA CSV file
ii  nvidia-container-csv-cudnn                    8.0.0.180-1+cuda10.2                             arm64        Jetpack CUDNN CSV file
ii  nvidia-container-csv-tensorrt                 7.1.3.0-1+cuda10.2                               arm64        Jetpack TensorRT CSV file
ii  nvidia-container-csv-visionworks              1.6.0.501                                        arm64        Jetpack VisionWorks CSV file
ii  nvidia-container-runtime                      3.1.0-1                                          arm64        NVIDIA container runtime
ii  nvidia-container-toolkit                      1.0.1-1                                          arm64        NVIDIA container runtime hook
ii  nvidia-docker2  

So you could try installing those packages manually. This list above was gathered from JetPack 4.5.
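To answer the earlier question about /etc/docker/daemon.json: on a stock JetPack install, the nvidia-docker2 package registers the runtime roughly like this (verify against your own file after installing the packages):

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

Some people also add "default-runtime": "nvidia" so that containers use the NVIDIA runtime without passing --runtime nvidia explicitly.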

@dusty_nv thanks for those details! For my case with L4T 32.4.3 I was able to get

docker run --rm --runtime nvidia -it nvcr.io/nvidia/l4t-base:r32.4.3

to run correctly after installing the following (I skipped cudnn and visionworks):

sudo apt install libnvidia-container-tools libnvidia-container0:arm64 nvidia-container-csv-cuda nvidia-container-csv-tensorrt nvidia-container-runtime nvidia-container-toolkit nvidia-docker2
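For anyone else on a custom BSP: those packages come from NVIDIA's Jetson apt repository, so the L4T apt source has to be present first. On my r32.4 system it looks like this (adjust the release tag to match your L4T version):

$ cat /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
deb https://repo.download.nvidia.com/jetson/common r32.4 main
deb https://repo.download.nvidia.com/jetson/t194 r32.4 main

I also restarted Docker (sudo systemctl restart docker) after installing, before testing the container.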

Thank you!