****NvRmMemInit failed**** error type: 196626 NvRmMemInitNvmap failed with Permission denied

I am trying to run a Docker container based on the nvcr.io/nvidia/l4t-tensorrt:r8.4.1-runtime image, and I create a non-root user in the Dockerfile:

# non-root user
ARG UNAME=ubuntu
ARG UID=1000
ARG GID=1000
RUN groupadd -g $GID -o $UNAME \
    && useradd -m -u $UID -g $GID -o -s /bin/bash $UNAME \
    && echo "$UNAME:recode" | chpasswd \
    && echo "root:recode" | chpasswd \
    && usermod -aG sudo,video,i2c $UNAME \
    && echo "$UNAME  ALL=(ALL:ALL) ALL" >> /etc/sudoers \
    && echo "$UNAME  ALL=(ALL:ALL) NOPASSWD: ALL" >> /etc/sudoers
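For context, on Jetson the NvRm memory manager opens Tegra device nodes (such as /dev/nvmap and the /dev/nvhost-* nodes), which are typically owned by root:video on the host. A minimal diagnostic sketch, assuming those paths (they can vary between L4T releases), for checking whether the container user can actually reach them:

```shell
# Inspect ownership and permissions of the Tegra device nodes the
# memory manager opens (paths are illustrative; they vary by L4T release).
ls -l /dev/nvmap /dev/nvhost-* 2>/dev/null || true

# Show the numeric IDs and group names of the current user; the user
# needs membership in the group that owns those nodes (usually "video").
id
id -Gn
```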

When I run my DeepStream application (DeepStream 6.2, the latest release), I see the error below. It looks like some kind of permission issue. I have already added $USER to the video and i2c groups, but still no luck.

****NvRmMemInit failed**** error type: 196626


*** NvRmMemInit failed NvRmMemConstructor
nvbufsurftransform:cuInit failed : 801 
NvRmMemInitNvmap failed with Permission denied
549: Memory Manager Not supported

When I run the application with sudo or as the root user, it works perfectly. What is wrong, and how can I run a DeepStream application inside the container as a non-root user?

Hardware is a Jetson AGX Xavier running the latest JetPack, 5.0.2:

$ cat /etc/nv_tegra_release
# R35 (release), REVISION: 1.0, GCID: 31346300, BOARD: t186ref, EABI: aarch64, DATE: Thu Aug 25 18:41:45 UTC 2022
$ sudo apt-cache show nvidia-jetpack
Package: nvidia-jetpack
Version: 5.0.2-b231
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-jetpack-runtime (= 5.0.2-b231), nvidia-jetpack-dev (= 5.0.2-b231)
Homepage: http://developer.nvidia.com/jetson
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_5.0.2-b231_arm64.deb
Size: 29304
SHA256: b1268b2cb969e677163f291967bc7542371a29d536379df3f7dfa1f247ff3fab
SHA1: 7ff288a771b83eec8f80a41ccf0eec490f32e10a
MD5sum: 5cc57807b33630d8edb249e53daf58ed
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Moved to Xavier forum for better support.

Hi,

Could you please share how you launch the container as well?
Thanks.

I use a run script that does the following

#!/bin/bash
export UID_GID="$(id -u):$(id -g)"
docker compose up -d l1 "$@" && docker compose exec l1 /bin/bash

The NVIDIA container runtime is enabled, and everything works if I run as the root user.
My compose file:

version: '3.7'
services:
  l1:
    runtime: nvidia
    build:
      context: .
      dockerfile: Dockerfile${SUFFIX}
    image: kamerai/analytics
    volumes:
      - '../:/work'
      - '/var/run/docker.sock:/var/run/docker.sock:rw'
      - '/tmp/.X11-unix:/tmp/.X11-unix'
      - '$XAUTHORITY:/root/.Xauthority:ro'
      - '/etc/timezone:/etc/timezone:ro'
      - '/etc/localtime:/etc/localtime:ro'
      - '$VIDEO_DIR:/work/videos'
      - '$RAMDISK:/ramdisk'
    privileged: true
    user: "${UID_GID}"
    environment:
      - DISPLAY
      - XDG_RUNTIME_DIR
      - QT_X11_NO_MITSHM=1
      - CONTAINER=1
      - LC_ALL=C.UTF-8
      - QT_QPA_PLATFORM=xcb
      - GST_PLUGIN_PATH=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/:/opt/nvidia/deepstream/deepstream/lib/:/work/src/app/plugins/lib/
      - LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:/usr/local/lib:/opt/nvidia/deepstream/deepstream/lib/gst-plugins/:/opt/nvidia/deepstream/deepstream/lib/:/work/src/app/plugins/lib/:/opt/nvidia/deepstream/deepstream/lib/cvcore_libs/:/usr/local/cuda-11.6/targets/x86_64-linux/lib/
      - GST_DEBUG_DUMP_DOT_DIR=/tmp
      - USE_NEW_NVSTREAMMUX=yes
    env_file:
      - .env
    network_mode: host
    tty: true
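One thing worth noting: when `user:` is set to a non-root UID:GID, the container process gets only the supplementary groups Docker assigns to it, and the `usermod -aG` from the Dockerfile applies only when the container runs as that named user. A hedged sketch of granting the group explicitly via compose (whether "video" resolves inside this image, and its numeric GID, are assumptions to verify with `getent group video`):

```yaml
# Hypothetical addition to the l1 service: give the non-root user the
# supplementary group that owns the Tegra device nodes. The name must
# resolve inside the image; otherwise use the host's numeric GID.
    group_add:
      - video   # or the numeric host GID, e.g. "44"
```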

My /etc/docker/daemon.json

cat /etc/docker/daemon.json 
{
    "data-root": "/mnt/ssd/docker",
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

If I run run.sh as the root user, it passes 0:0 to the compose user parameter and I get a root user inside the container; that user can run all the DeepStream apps without this error. When I run as a regular user, 1000:1000 is passed as the user parameter and the application throws the NvRmMemInit failed error.
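Since it works as 0:0 but not as 1000:1000, one plausible cause is that UID 1000 inside the container lacks the group that owns the Tegra device nodes. A small diagnostic sketch (the group name "video" as owner is the usual L4T convention, but an assumption to verify on your system):

```shell
# Compare the host's "video" GID with the groups the container user has.
# If user: "1000:1000" is passed without that GID, the process will not
# have access to /dev/nvmap even if the image's user is in "video".
host_video_gid="$(getent group video | cut -d: -f3)"
echo "host video GID: ${host_video_gid:-not found}"
id -Gn
```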

Hi,

Do you run the container on the Jetson Xavier?
The compose file references some x86 library paths that are not available on Jetson:

...
      - LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:/usr/local/lib:/opt/nvidia/deepstream/deepstream/lib/gst-plugins/:/opt/nvidia/deepstream/deepstream/lib/:/work/src/app/plugins/lib/:/opt/nvidia/deepstream/deepstream/lib/cvcore_libs/:/usr/local/cuda-11.6/targets/x86_64-linux/lib/
...

Thanks

Thanks for pointing that out. I will fix it, but it is not contributing to this problem. The primary issue remains: the executable runs if I launch it as root; otherwise it throws the NvRmMemInit error.

Hi,

Could you share which version of docker compose you used and how you installed it?
We tried to reproduce this issue but hit the following compatibility error:

$ sudo ./run.sh 
WARNING: The SUFFIX variable is not set. Defaulting to a blank string.
WARNING: The XAUTHORITY variable is not set. Defaulting to a blank string.
WARNING: The VIDEO_DIR variable is not set. Defaulting to a blank string.
WARNING: The RAMDISK variable is not set. Defaulting to a blank string.
ERROR: The Compose file './docker-compose.yaml' is invalid because:
Unsupported config option for services.l1: 'runtime'

Thanks.

I am using docker compose v2, which is bundled as a plugin with the docker installation.

$ docker --version
Docker version 19.03.12, build 48a66213fe
$ docker compose version
Docker Compose version v2.2.3

Thanks for the information.

We are testing internally and will share more information with you later.

Hi,

Do you use a custom docker?
We test this with JetPack 5.0.2 and the default docker version should be:

$ docker --version
Docker version 20.10.12, build 20.10.12-0ubuntu2~20.04.1

Could you use docker 20.10.12 and try it again?
Thanks.

Okay, I will test and check.

Hi,

We got a similar topic where the user reported a fix.
Please check whether it helps with your issue as well.

Thanks.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.