How to solve Docker: Error response from daemon

Hello, I got a Jetson Nano (2 GB) a little while ago and tried to follow the setup proposed by NVIDIA. When running Docker (./docker_dli_run.sh) I get this error message:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --compat32 --graphics --utility --video --display --pid=12340 /var/lib/docker/overlay2/70217fbf8e9d4fd1f685cdfa70c4bca1a08c4e6561db5386bac188b288669533/merged]\\nnvidia-container-cli: mount error: failed to create file: /var/lib/docker/overlay2/70217fbf8e9d4fd1f685cdfa70c4bca1a08c4e6561db5386bac188b288669533/merged/usr/lib/aarch64-linux-gnu/libnvidia-fatbinaryloader.so.440.18: file exists\\\"\"": unknown

I don’t understand why it is returning this; can anyone help me?

Hi,

Please note that you will need to set up the Nano with the same JetPack version as the container.
For example, JetPack 4.5 (r32.5.0) is required for dli-nano-ai:v2.0.1-r32.5.0.
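One quick way to check which release is installed on your Nano (a general L4T check, not something specific to this course) is:

cat /etc/nv_tegra_release

The output starts with something like “# R32 (release), REVISION: 5.1”, which corresponds to L4T r32.5.1.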

Thanks.

Thanks for the answer, but how do I find out which version I should install?

I had the same problem, and the documentation for the Getting Started course is not completely clear about which tag should be chosen.

You can find the version of your JetPack using the following command:

dpkg-query --show nvidia-l4t-core

Your output will look like:

nvidia-l4t-core 32.5.1-20210219084526

After that, you should be able to run the container:

docker run --runtime nvidia -it --rm --network host --volume ~/nvdli-data:/nvdli-nano/data --device /dev/video0 nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.0

Note that the <tag> portion, v2.0.1-r32.5.0, refers to the L4T core version running on the Jetson.
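As a rough sketch (not an official helper; it assumes the image keeps the v2.0.1-rXX.Y.0 naming used above), you could derive the matching tag from the installed L4T version like this:

ver=$(dpkg-query --show --showformat='${Version}' nvidia-l4t-core)   # e.g. 32.5.1-20210219084526
rel=$(echo "$ver" | cut -d- -f1 | cut -d. -f1,2)                     # keep major.minor, e.g. 32.5
echo "nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r${rel}.0"               # -> nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.0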

Thanks to @dusty_nv for sharing the command to get the JetPack version!

I’m getting this error now. Could it be because I don’t have a camera connected?

I also ran docker run --runtime nvidia -it --rm --network host --volume ~/nvdli-data:/nvdli-nano/data --device /dev/video0 nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.0, but it did not resolve the error.

And finally, which arguments should I pass to dpkg-query to get the NVIDIA/JetPack version?

If someone can help me resolve this problem, I would like to continue the NVIDIA tutorial.

Indeed, the command I shared is for running the container with a USB camera connected to the Jetson. If you are using a CSI camera, the command is different.

You can find more details in the official docs:

https://developer.nvidia.com/embedded/learn/tutorials/jetson-container
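For reference, the CSI variant typically mounts the Argus camera socket instead of passing the V4L2 device; check the tutorial above for the exact command for your setup, but it looks roughly like:

docker run --runtime nvidia -it --rm --network host --volume ~/nvdli-data:/nvdli-nano/data --volume /tmp/argus_socket:/tmp/argus_socket nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.0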

@AastaLLL, I finally figured out what you were saying and was able to make things work. For the poor souls who end up here, I want to expand on that and share the steps I took:

  1. Found out which JetPack version I have on my Nano 2 GB using sudo apt-cache show nvidia-jetpack.
  2. Clicked on the link you provided and scrolled down to the “Run the Container” section, where there is a table showing which container tag to use for my JetPack version.
  3. Modified docker_dli_run.sh with the appropriate tag from step 2 (see the sketch after this list).
  4. Ran the script again; the error was gone and I was able to get inside the container.
    Hope this will work for others too.
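For illustration, a modified docker_dli_run.sh might end up looking like this (a sketch only; the exact script shipped with the course can contain more options, and only the final image tag needs to change to match your JetPack release):

sudo docker run --runtime nvidia -it --rm --network host \
    --volume ~/nvdli-data:/nvdli-nano/data \
    --device /dev/video0 \
    nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.0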

Good to know the issue is fixed.
Also, thanks for summarizing the detailed steps here.

Could you tell me which version I should put as the tag, given the JetPack version info below?

Package: nvidia-jetpack
Version: 4.5.1-b17
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-cuda (= 4.5.1-b17), nvidia-opencv (= 4.5.1-b17), nvidia-cudnn8 (= 4.5.1-b17), nvidia-tensorrt (= 4.5.1-b17), nvidia-visionworks (= 4.5.1-b17), nvidia-container (= 4.5.1-b17), nvidia-vpi (= 4.5.1-b17), nvidia-l4t-jetson-multimedia-api (>> 32.5-0), nvidia-l4t-jetson-multimedia-api (<< 32.6-0)
Homepage: Autonomous Machines | NVIDIA Developer
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_4.5.1-b17_arm64.deb
Size: 29390
SHA256: 13c10e9a53ec51c261ce188d626966dfca27f26b2ed94ba700147c1ba3e35399
SHA1: 81047a7779241bbf16763dbd1c4c12cf8c9d0496
MD5sum: 54916439514f39af5234b3a43e329910
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Package: nvidia-jetpack
Version: 4.5-b129
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-cuda (= 4.5-b129), nvidia-opencv (= 4.5-b129), nvidia-cudnn8 (= 4.5-b129), nvidia-tensorrt (= 4.5-b129), nvidia-visionworks (= 4.5-b129), nvidia-container (= 4.5-b129), nvidia-vpi (= 4.5-b129), nvidia-l4t-jetson-multimedia-api (>> 32.5-0), nvidia-l4t-jetson-multimedia-api (<< 32.6-0)
Homepage: Autonomous Machines | NVIDIA Developer
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_4.5-b129_arm64.deb
Size: 29358
SHA256: 9ee354a66d932a3fbb244c926f333143a845c627c6981d108e01df2958ac462c
SHA1: 0e07f27c6fb9e34a70c69ae1150d1e578e938089
MD5sum: a551bbc8ff653c8983ce1804082bbcab
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Hi @blague12400, both JetPack 4.5 and JetPack 4.5.1 can use the r32.5.0 tag: nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.0

Thank you all for your precious help. I managed to connect to JupyterLab and could finally start the AI course. One last question: when I shut down the Jupyter server, what command should I type to restart it without redoing all the commands and without reloading all the packages? Or does the server start by itself when the Jetson boots? Thank you all again for your help.

The Jupyter server starts when you start the container. You shouldn’t need to download the packages again; you only need to start the container.
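For example (assuming the course script is still in your home directory), restarting is just:

cd ~
./docker_dli_run.sh

Once the container is up, it prints the JupyterLab address to open in your browser (for a Nano connected over the USB device link this is typically http://192.168.55.1:8888).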