Memory Manager Not supported Error while Running an AI application on Jetson Nano 8G

Hello,

I am trying to run the “nanoOWL” AI application on the Jetson Nano (8G) device with JetPack 6.2. However, I am facing the following error while running the Jetson container.

Command:
jetson-containers run --workdir /opt/nanoowl $(autotag nanoowl)

Error:
V4L2_DEVICES:
/home/dsc/jetson-containers/run.sh: line 309: /tmp/nv_jetson_model: Permission denied
+ docker run --runtime nvidia -it --rm --network host --shm-size=8g --volume /tmp/argus_socket:/tmp/argus_socket --volume /etc/enctune.conf:/etc/enctune.conf --volume /etc/nv_tegra_release:/etc/nv_tegra_release --volume /tmp/nv_jetson_model:/tmp/nv_jetson_model --volume /var/run/dbus:/var/run/dbus --volume /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket --volume /var/run/docker.sock:/var/run/docker.sock --volume /home/dsc/jetson-containers/data:/data -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro --device /dev/snd --device /dev/bus/usb --device /dev/i2c-0 --device /dev/i2c-1 --device /dev/i2c-2 --device /dev/i2c-4 --device /dev/i2c-5 --device /dev/i2c-7 --name jetson_container_20250228_181718 --workdir /opt/nanoowl dustynv/nanoowl:r36.4.0
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running prestart hook #0: exit status 1, stdout: , stderr: Auto-detected mode as ‘legacy’
NvRmMemInitNvmap failed with Permission denied
356: Memory Manager Not supported
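For anyone hitting the same "Permission denied" on /tmp/nv_jetson_model: this looks like a stale cache file on the host, left behind by an earlier run (for example under sudo), that the current user can no longer overwrite. A quick diagnostic sketch, assuming only the path shown in the log above:

```shell
# Path taken from the error log; this is only a diagnostic check.
MODEL_FILE=/tmp/nv_jetson_model
if [ -e "$MODEL_FILE" ] && [ ! -w "$MODEL_FILE" ]; then
    echo "stale: $MODEL_FILE exists but is not writable by $(id -un)"
    # Removing it lets run.sh recreate it with the right ownership:
    # sudo rm -f "$MODEL_FILE"
else
    echo "ok: $MODEL_FILE is absent or writable"
fi
```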

Jetson_Nano_8G_AI_App_Error_Logs.log (4.4 KB)

I am attaching the full logs. Please help me resolve this error.

Thank you,
Brijesh Thakkar

Hi,

The error indicates a permission issue.

Would you mind using a workspace with write permission instead?
For example:

$ jetson-containers run --workdir ${HOME}/nanoowl $(autotag nanoowl)

Thanks.

Hi,

Thank you for your reply.

I have tried changing the workspace as you suggested, but I am still getting the same issue.

Also, please note that when I tried to check the GPU status with the nvidia-smi command, I got the error below. I think this could be related to the issue I am facing.

$ nvidia-smi
Unable to determine the device handle for GPU0002:00:00.0: Unknown Error

I am blocked on this. It would be great if someone could share a few pointers on how to resolve this error. Thank you so much; I really appreciate your help.
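In case it is useful, I also checked whether the Tegra GPU device nodes exist on the host. The node names below are my assumption based on the NvRmMemInitNvmap error (nvmap appears to be the allocator it is failing on), not from any official documentation:

```shell
# Check the Tegra GPU device nodes on the host (assumed names; nvmap is
# the memory allocator that NvRmMemInitNvmap appears to be failing on).
for dev in /dev/nvmap /dev/nvhost-ctrl /dev/nvhost-ctrl-gpu; do
    if [ -e "$dev" ]; then
        ls -l "$dev"
    else
        echo "missing: $dev"
    fi
done
```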

Hi,

Sorry for the late update.

Based on your log, it seems that there are some access issues inside the container.
Could you try to add $USER account into the docker group to see if it works?

sudo usermod -aG docker $USER
sudo systemctl restart docker
newgrp docker
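After running those commands, you can verify that the group change took effect without logging out. This is just a quick sanity check, not a full diagnosis:

```shell
# Verify the current shell is in the docker group and the daemon is
# reachable without sudo. Re-login if the group is still not active.
if id -nG | grep -qw docker; then
    echo "docker group: active"
else
    echo "docker group: not active yet (log out and back in)"
fi
docker info > /dev/null 2>&1 && echo "daemon: reachable" || echo "daemon: not reachable"
```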

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.