Use Jetson AGX Orin’s GPU from Rootless Docker


We want to use Rootless Docker for security reasons. However, when we try to access the GPU from Rootless Docker, it fails with an error.

[Steps we tried]

First, stop the original Docker service

sudo systemctl stop docker.socket
sudo systemctl stop docker
sudo systemctl disable docker
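Before switching over, it can help to confirm the system-wide daemon is really down. A small sketch (guarded so it also runs on machines without systemd):

```shell
# The system-wide daemon should now report "inactive" (or nothing at all
# where systemctl is unavailable).
SYS_DOCKER_STATE=$(systemctl is-active docker 2>/dev/null || true)
echo "system docker: ${SYS_DOCKER_STATE:-not running}"
```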

Create a directory for Docker

sudo mkdir -p /mnt/docker
sudo chown -R name:name /mnt/docker

Install Rootless Docker

sudo apt install uidmap
curl -fsSL | sh
mkdir -p ~/.config/docker
vi ~/.config/docker/daemon.json
sudo vi /etc/nvidia-container-runtime/config.toml
systemctl --user enable docker
sudo loginctl enable-linger $(whoami)
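One detail that often trips people up after these steps: the client must talk to the per-user socket, not the system-wide /var/run/docker.sock. A minimal sketch, assuming the standard rootless socket path:

```shell
# Rootless Docker serves a per-user socket derived from your UID instead of
# the system-wide /var/run/docker.sock; the client finds it via DOCKER_HOST.
export DOCKER_HOST="unix:///run/user/$(id -u)/docker.sock"
echo "$DOCKER_HOST"
# With the user daemon running, `docker info` should then list "rootless"
# among its Security Options.
```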


    "data-root": "/mnt/docker",
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"

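A stray comma or unbalanced brace in daemon.json makes the daemon fail to start with very little feedback, so it is worth validating the file before restarting. A sketch, run here against an inline copy of the intended content (on the device, point json.tool at ~/.config/docker/daemon.json itself, then restart with systemctl --user restart docker):

```shell
# Validate daemon.json syntax before restarting the daemon.
# /tmp/daemon.json is an illustrative copy; on the device, check
# ~/.config/docker/daemon.json instead.
cat > /tmp/daemon.json <<'EOF'
{
    "data-root": "/mnt/docker",
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid JSON"
```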
/etc/nvidia-container-runtime/config.toml (change no-cgroups = false to true):

#accept-nvidia-visible-devices-as-volume-mounts = false
#accept-nvidia-visible-devices-envvar-when-unprivileged = true
disable-require = false
#swarm-resource = "DOCKER_RESOURCE_GPU"

[nvidia-container-cli]
#debug = "/var/log/nvidia-container-toolkit.log"
environment = []
#ldcache = "/etc/"
ldconfig = "@/sbin/ldconfig.real"
load-kmods = true
no-cgroups = true
#path = "/usr/bin/nvidia-container-cli"
#root = "/run/nvidia/driver"
#user = "root:video"

[nvidia-container-runtime]
#debug = "/var/log/nvidia-container-runtime.log"
log-level = "info"
mode = "auto"
runtimes = ["docker-runc", "runc"]

[nvidia-container-runtime.modes.csv]
mount-spec-path = "/etc/nvidia-container-runtime/host-files-for-container.d"
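Since this whole workflow hinges on that one flag, a quick grep can confirm the edit actually took. Sketched here against an inline copy for illustration (on the device, grep /etc/nvidia-container-runtime/config.toml itself):

```shell
# Confirm no-cgroups was flipped to true -- the rootless setup depends on it.
# /tmp/config.toml stands in for /etc/nvidia-container-runtime/config.toml.
cat > /tmp/config.toml <<'EOF'
load-kmods = true
no-cgroups = true
EOF
grep -Eq '^no-cgroups[[:space:]]*=[[:space:]]*true' /tmp/config.toml \
  && echo "no-cgroups is enabled"
```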

Creating a Docker container

docker run -itd --runtime nvidia --name gputest

The container is now created, but…

Executing commands inside the container

docker exec -it gputest bash

From here, inside the container

cd /usr/local/cuda-11.4/samples/1_Utilities/deviceQuery

Execution result:

./deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

NvRmMemInitNvmap failed with Permission denied
549: Memory Manager Not supported

NvRmMemInit failed error type: 196626

*** NvRmMemInit failed NvRmMemConstructor
cudaGetDeviceCount returned 801
-> operation not supported
Result = FAIL
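One thing worth ruling out when NvRmMemInit reports "Permission denied" is whether the Tegra device nodes are accessible at all. A hedged check, assuming the usual Jetson node names (guarded so it is harmless on machines without them):

```shell
# The Tegra memory manager is exposed through device nodes such as /dev/nvmap;
# "Permission denied" from NvRmMemInitNvmap usually means the calling user
# cannot open them. List them and note the owning group (typically "video"):
ls -l /dev/nvmap /dev/nvhost-ctrl* 2>/dev/null || echo "no Tegra device nodes here"
# Then confirm the user running the container is actually in that group:
HOST_GROUPS=$(id -nG)
echo "current user groups: $HOST_GROUPS"
```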


Sorry for the late update. Please find more info below:


Thanks for the reply.
I added the groups below and tried again, but the same error occurred.
usermod -aG sudo,video,i2c "$USER"


Could you try to build it like below?


ARG USER=customuser
RUN useradd --create-home --user-group --groups sudo --shell /bin/bash "$USER"
RUN usermod -aG sudo,video,i2c "$USER"



Thanks for the reply.
I built a Docker image from the Dockerfile you suggested, but the same error occurred.


We have tested it and it works correctly.
Please try it again.

$ curl -fsSL | sh
$ sudo usermod -aG docker $USER
$ docker run -it --user customuser --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix test
$ cd
$ /usr/local/cuda/bin/ .
$ cd NVIDIA_CUDA-11.4_Samples/1_Utilities/deviceQuery
$ make
$ ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Orin"
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.4, CUDA Runtime Version = 11.4, NumDevs = 1
Result = PASS


Thanks for the reply.

I’m trying the method you suggested, but it’s not working.
Our Jetson already has Rootless Docker installed, so maybe something is configured differently.

When you check with the command below, does Docker show as rootless?
$ docker info
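For reference, this sketch prints which kind of daemon the client reaches (it assumes only that the docker CLI may or may not be on PATH; everything else is guarded):

```shell
# Report whether the daemon answering the client is rootless. Rootless
# setups list "rootless" among the Security Options in `docker info`.
if command -v docker >/dev/null 2>&1 \
   && docker info --format '{{.SecurityOptions}}' 2>/dev/null | grep -q rootless; then
  DOCKER_MODE="rootless"
else
  DOCKER_MODE="rootful or unreachable"
fi
echo "docker daemon: $DOCKER_MODE"
```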


Is this still an issue that needs support? Is there any result that can be shared?

We haven’t been able to solve the problem yet.
According to AastaLLL’s answer it worked, but it didn’t work when we tried it.
We want to use the GPU from Rootless Docker. When AastaLLL tried it, could you please confirm whether it was really Rootless Docker?