Hi everybody,
I want to launch a Docker container from the nvcr.io/nvidia/l4t-pytorch:r34.1.1-pth1.12-py3 image
with the --privileged flag, as follows:
docker run -it --runtime nvidia --privileged --network host --name tt --user $(id -u):$(id -g) nvcr.io/nvidia/l4t-pytorch:r34.1.1-pth1.12-py3
However, I get the following error:
stderr: nvidia-container-cli: mount error: file creation failed: /sd_card/docker/overlay2/2ec7d73d027a4924d3bd1845a37b0fbf4510851b566c919e7e50e3405444c794/merged/dev/nvhost-as-gpu: invalid argument: unknown.
ERRO[0000] error waiting for container: context canceled
Any ideas why this happens?
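One way to narrow this down (a sketch, assuming default JetPack paths) is to confirm that the failing device node exists on the host and find which CSV mount specification asks the runtime to create it inside the container:

```shell
# Does the device node exist on the host?
ls -l /dev/nvhost-as-gpu

# Which CSV mount specification references it?
grep -rn "nvhost-as-gpu" /etc/nvidia-container-runtime/host-files-for-container.d/
```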
This is my Orin config (in case this helps):
NVIDIA NVIDIA Orin Jetson-Small Developer Kit
L4T 34.1.1 [ JetPack 5.0.1 DP ]
Ubuntu 20.04.5 LTS
Kernel Version: 5.10.65-tegra
CUDA: NOT_INSTALLED
CUDA Architecture: 8.7
OpenCV version: 4.5.4
OpenCV Cuda: NO
CUDNN: 8.3.2.49
TensorRT: 8.4.0.11
Vision Works: NOT_INSTALLED
VPI: 2.0.14
Vulkan: 1.3.203
nvidia-container-cli --version
cli-version: 1.9.0
lib-version: 0.11.0+jetpack
build date: 2022-03-18T13:49+00:00
build revision: 5e135c17d6dbae861ec343e9a8d3a0d2af758a4f
build compiler: aarch64-linux-gnu-gcc-7 7.5.0
build platform: aarch64
build flags: -D_GNU_SOURCE -D_FORTIFY_SOURCE=2 -DNDEBUG -std=gnu11 -O2 -g -fdata-sections -ffunction-sections -fplan9-extensions -fstack-protector -fno-strict-aliasing -fvisibility=hidden -Wall -Wextra -Wcast-align -Wpointer-arith -Wmissing-prototypes -Wnonnull -Wwrite-strings -Wlogical-op -Wformat=2 -Wmissing-format-attribute -Winit-self -Wshadow -Wstrict-prototypes -Wunreachable-code -Wconversion -Wsign-conversion -Wno-unknown-warning-option -Wno-format-extra-args -Wno-gnu-alignof-expression -Wl,-zrelro -Wl,-znow -Wl,-zdefs -Wl,--gc-sections
docker --version
Docker version 20.10.12, build 20.10.12-0ubuntu2~20.04.1
Hi @jnaranjo, could you refer to this post?
Hi all,
We found a way to run --privileged and --runtime nvidia together.
Please edit the following file and comment out the /dev/nvhost-as-gpu entry:
/etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv
dev, /dev/fb0
dev, /dev/fb1
#dev, /dev/nvhost-as-gpu
dev, /dev/nvhost-ctrl
...
Although the node is commented out, you can still access it within the container.
We tested a CUDA sample and it runs normally.
Please note that if you run the container without --privileged, the full l4t…
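The edit described above can also be scripted. This is a sketch, not an official procedure: it backs up the file first and comments out the entry with sed (the CSV path is the one from the post):

```shell
CSV=/etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv

# Back up the original mount spec before changing it
sudo cp "$CSV" "$CSV.bak"

# Comment out the nvhost-as-gpu entry in place
sudo sed -i 's|^dev, /dev/nvhost-as-gpu|#dev, /dev/nvhost-as-gpu|' "$CSV"

# Verify: the entry should now start with '#'
grep "nvhost-as-gpu" "$CSV"
```

To undo the change, restore the backup with `sudo cp "$CSV.bak" "$CSV"`.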
Hi, I followed:
$ sudo apt install curl
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/experimental/$distribution/libnvidia-container.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
$ sudo apt update
$ sudo apt-get install nvidia-container-toolkit=1.10.0~rc.3-1
but I get E: Version '1.10.0~rc.3-1' for 'nvidia-container-toolkit' was not found
Version 1.10.0 cannot be found either.
Should I just run sudo apt-get install --only-upgrade nvidia-container-toolkit instead?
Hi @jnaranjo, presumably that package version has moved out of experimental/RC since @AastaLLL's post. You could try running apt-cache madison nvidia-container-toolkit to show the available versions that can be installed, or use this workaround until the next JetPack is released.
BTW this was from JetPack 5.0.2 / L4T R35.1.0:
$ apt-cache madison nvidia-container-toolkit
nvidia-container-toolkit | 1.11.0~rc.1-1 | https://repo.download.nvidia.com/jetson/common r35.1/main arm64 Packages
So upgrading from JetPack 5.0.1 DP (L4T R34.1.1) to JetPack 5.0.2 (L4T R35.1) may help.
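If apt-cache madison does list a newer version, pinning it looks like this (a sketch; the version string below is taken from the madison output above and may differ on your system):

```shell
# List every nvidia-container-toolkit version the configured repos offer
apt-cache madison nvidia-container-toolkit

# Install a specific version from that list (replace with one actually shown)
sudo apt-get install nvidia-container-toolkit=1.11.0~rc.1-1

# Or simply upgrade to the newest version available
sudo apt-get install --only-upgrade nvidia-container-toolkit
```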
This topic was automatically closed 14 days after the last reply, on February 7, 2023. New replies are no longer allowed.