I'm using a Jetson Orin Nano Developer Kit with JetPack 6.1+b123 installed. I created a new user "test_user" and installed a rootless Docker engine (version 27.4.1) for this user, following: Rootless mode | Docker Docs.
I'm trying to run a custom Docker image based on l4t-jetpack:r36.4.0 (in which I just install PyTorch to test GPU availability), which fails with the following error:
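For reference, the image is minimal, roughly as sketched below. The base tag is the one from my post; the image name and the exact pip install line are placeholders (on Jetson I actually use the Jetson-specific PyTorch wheel, not the generic PyPI one):

```shell
# Hypothetical sketch of the custom image described above.
cat > Dockerfile <<'EOF'
FROM nvcr.io/nvidia/l4t-jetpack:r36.4.0
# Placeholder: in my real setup this installs the Jetson PyTorch wheel.
RUN pip3 install torch
EOF

# "torch-gpu-test" is an illustrative image name.
docker build -t torch-gpu-test .
```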
“”"NvRmMemInitNvmap failed with Permission denied
356: Memory Manager Not supported
NvRmMemMgrInit failed error type: 196626
/usr/local/lib/python3.10/dist-packages/torch/cuda/init.py:129: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at /opt/pytorch/pytorch/c10/cuda/CUDAFunctions.cpp:108.)
return torch._C._cuda_getDeviceCount() > 0
Traceback (most recent call last):
File “”, line 1, in “”"
I built the same image with the rootful Docker engine, and there the container has access to the GPU without any problems.
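To be clear, I run the same command under both engines; only the Docker context (rootful vs. rootless) differs. Roughly (the image name is a placeholder for my custom image):

```shell
# Identical invocation in both cases. Works under rootful Docker,
# fails with the NvRmMemInitNvmap error under the rootless engine.
docker run --rm --runtime nvidia torch-gpu-test \
  python3 -c "import torch; print(torch.cuda.is_available())"
```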
I'm using the preinstalled nvidia-container-toolkit (version 1.14.2-1) and configured the runtime for rootless mode following: Installing the NVIDIA Container Toolkit — NVIDIA Container Toolkit 1.17.3 documentation. The user "test_user" is a member of the sudo, video, and i2c groups.
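Concretely, for the rootless setup I ran roughly the steps below, paraphrased from the toolkit documentation (paths assume a default rootless Docker install for this user):

```shell
# Register the NVIDIA runtime in the rootless daemon's config.
nvidia-ctk runtime configure --runtime=docker \
  --config="$HOME/.config/docker/daemon.json"

# Restart the per-user Docker daemon.
systemctl --user restart docker

# Disable cgroup handling in nvidia-container-cli, as the
# rootless instructions require.
sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
```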
I read here: JetPack 6.3 containerd and kubernetes - #12 by AastaLLL that there are known issues on the Orin series with nvidia-container-cli and Kubernetes, but the solution there didn't help me.