Getting GPU Docker passthrough working

I’m trying to get the containers running on my Jetson Xavier AGX to use the GPU.
I’ve followed these instructions and also these, and I do see everything I should when validating:

$ sudo dpkg --get-selections | grep nvidia
libnvidia-container-tools                       install
libnvidia-container0:arm64                      install
libnvidia-container1:arm64                      install
nvidia-container-runtime                        install
nvidia-container-toolkit                        install
nvidia-docker2                                  install
nvidia-l4t-3d-core                              install
nvidia-l4t-apt-source                           install
nvidia-l4t-bootloader                           install
nvidia-l4t-camera                               install
nvidia-l4t-configs                              install
nvidia-l4t-core                                 install
nvidia-l4t-cuda                                 install
nvidia-l4t-display-kernel                       install
nvidia-l4t-firmware                             install
nvidia-l4t-gputools                             install
nvidia-l4t-graphics-demos                       install
nvidia-l4t-gstreamer                            install
nvidia-l4t-init                                 install
nvidia-l4t-initrd                               install
nvidia-l4t-jetson-io                            install
nvidia-l4t-jetson-multimedia-api                install
nvidia-l4t-jetsonpower-gui-tools                install
nvidia-l4t-kernel                               install
nvidia-l4t-kernel-dtbs                          install
nvidia-l4t-kernel-headers                       install
nvidia-l4t-libvulkan                            install
nvidia-l4t-multimedia                           install
nvidia-l4t-multimedia-utils                     install
nvidia-l4t-nvfancontrol                         install
nvidia-l4t-nvpmodel                             install
nvidia-l4t-nvpmodel-gui-tools                   install
nvidia-l4t-nvsci                                install
nvidia-l4t-oem-config                           install
nvidia-l4t-optee                                install
nvidia-l4t-pva                                  install
nvidia-l4t-tools                                install
nvidia-l4t-wayland                              install
nvidia-l4t-weston                               install
nvidia-l4t-x11                                  install
nvidia-l4t-xusb-firmware                        install

AND

$ sudo docker info | grep nvidia
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux nvidia runc

However, when I try to run a CUDA base image:

sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi

Or via docker-compose:

services:
  test:
    image: nvidia/cuda:10.2-base
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities: [gpu]

I get the following error:

docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'csv'
invoking the NVIDIA Container Runtime Hook directly (e.g. specifying the docker --gpus flag) is not supported. Please use the NVIDIA Container Runtime instead.: unknown.

Last thing to mention: the device is running headless (no screen attached), in case that matters.

Any thoughts on how to pass the GPU to the docker containers?

Desktop uses different images than arm64. Make sure you’re using the right one.

Also make sure JetPack has a clean install - run SDK Manager to reflash if needed. And make sure whatever version you choose is compatible with your version of JetPack.
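
If you want to double-check which L4T release is actually on the board (and therefore which JetPack it corresponds to), the usual place to look on the host is:

$ cat /etc/nv_tegra_release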

Finally - be very careful running “apt update” because it will likely break docker.

edit: Oh yeah, there is no “nvidia-smi” on arm64 - so what you are trying will never work.
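
Since nvidia-smi isn’t there, a rough substitute is to watch tegrastats on the host while the container workload runs - the GR3D_FREQ field shows GPU load, so you can at least see whether anything is hitting the GPU:

$ sudo tegrastats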

Oh… so any idea how I can validate whether GPU passthrough in Docker is working (or not)?

I’m using it with a camera connected - so I just fire up GStreamer and see if the encoder works.
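
For the encoder check, a minimal test-pattern pipeline like this is usually enough (just a sketch - it assumes the L4T GStreamer plugins such as nvvidconv and nvv4l2h264enc are visible inside the container):

gst-launch-1.0 videotestsrc num-buffers=300 ! 'video/x-raw,width=1280,height=720' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM)' ! nvv4l2h264enc ! h264parse ! \
  filesink location=test.h264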

There is other stuff that needs more complicated testing though, like inference models etc. Try running the DeepStream samples.
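
If you go down the DeepStream route, the L4T DeepStream container on NGC ships the reference app and sample configs. Roughly like this (the tag and config path here are assumptions - check the container page for the current ones, and on a headless box you’ll want to switch the sink in the config away from the display):

sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/deepstream-l4t:6.0.1-samples \
  deepstream-app -c /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt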

Read the docs again, and pay special attention to the different instructions given for “dgpu” vs “jetson”. DGPU means desktop systems, and the procedure is a little different.

In my experience, getting the GPU to “work” means nothing - the difficulty comes down to software compatibility. You must have a clean JetPack install, and you must be running containers that are compatible with it.

The newer versions of JetPack come with the “nvidia container” stuff - the older versions didn’t. The easy fix is to just update JetPack rather than mess around with manually installing packages.

That’s exactly what I did.
I installed the latest JetPack, which should have this built in.
Then I used the validation commands from the docs to confirm the proper packages are installed, and they are…
The idea is to run a Plex server, which knows how to use the GPU for transcoding; however, only the CPU is used.

Maybe it’s working fine already. Run bash instead of nvidia-smi inside the container, then try to run GStreamer.
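
Concretely, something like this (the image tag is a guess - match it to your JetPack release, and you may need to apt install gstreamer1.0-tools inside the container if the image doesn’t ship it):

sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.7.1 /bin/bash
# then inside the container, check whether the NVIDIA encoder element is visible:
gst-inspect-1.0 nvv4l2h264enc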

I keep getting the same error when adding the GPU device. docker-compose.yaml:

services:
  test:
    image: nvidia/cuda:10.2-base
    command: echo "hello"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities: [gpu]

And output for docker-compose up:

Removing cuda_test_1
Recreating faedc76c10d5_cuda_test_1 ... error

ERROR: for faedc76c10d5_cuda_test_1  Cannot start service test: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'csv'
invoking the NVIDIA Container Runtime Hook directly (e.g. specifying the docker --gpus flag) is not supported. Please use the NVIDIA Container Runtime instead.: unknown

ERROR: for test  Cannot start service test: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'csv'
invoking the NVIDIA Container Runtime Hook directly (e.g. specifying the docker --gpus flag) is not supported. Please use the NVIDIA Container Runtime instead.: unknown
ERROR: Encountered errors while bringing up the project.

Follow the example here: Your First Jetson Container | NVIDIA Developer
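
For reference, the pattern in that tutorial is to start an L4T base image with the nvidia runtime instead of --gpus. A quick smoke test along those lines (tag is a guess - pick the one matching your L4T release; on older JetPack releases the runtime mounts the host’s CUDA into the container, so this should list a CUDA install):

sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.7.1 \
  /bin/bash -c "ls /usr/local/cuda"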

The L4T containers are the ones you want: NVIDIA L4T Base | NVIDIA NGC

CUDA works a bit differently on the Jetson, and NVIDIA keeps changing the way things work as well. Don’t follow the same procedure as you would on desktop.

Looks like that’s a non-Tegra container you are trying; use the L4T version instead: NVIDIA L4T CUDA | NVIDIA NGC
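
If you’re going through docker-compose, the equivalent sketch is to point at an L4T image and select the nvidia runtime explicitly instead of the device reservation (the image tag is a guess - pick one for your JetPack from NGC; older docker-compose versions that don’t understand runtime: generally need "default-runtime": "nvidia" set in /etc/docker/daemon.json instead):

services:
  test:
    image: nvcr.io/nvidia/l4t-cuda:11.4.19-runtime
    runtime: nvidia
    command: /bin/bash -c "ls /usr/local/cuda"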
