nvcr.io/nvidia/l4t-pytorch:r34.1.1-pth1.12-py3 is not available on Jetson Nano

Hi, I pulled the Docker image nvcr.io/nvidia/l4t-pytorch:r32.6.1-pth1.8-py3 because I have JetPack 4.6 installed on my Jetson Nano. After pulling, I tried to run the image and got the following error:

insung@insung-desktop:~/MotuS-ML$ sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-pytorch:r34.1.1-pth1.12-py3
docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request: unknown.

I think --runtime nvidia is the problem, because there are no errors when I run the container without that option.

Hi,

r34/r35 doesn’t support Jetson Nano.
It looks like you are using an r34 container.
Please use an r32 container (e.g. nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3) instead.
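For reference, pulling and running the r32 tag looks the same as the original command, just with the matching tag (a sketch; the options are copied from the command above):

$ sudo docker pull nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3
$ sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3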

Thanks.


Thanks for your reply!

I reflashed the SD card and installed JetPack 4.6.1 on my Jetson Nano. I have a couple of additional questions…

  1. I need to deploy an AI service on the Jetson: a container built from a Docker image uses CUDA and runs a FastAPI server. Is there a way to start it automatically, without logging in? (See the sketch after this list.)
  2. How do I find the correct NGC container version for my device?
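For question 1, one common approach (a sketch, not from the reply below; the image and container names are placeholders) is to give the container a Docker restart policy, so the Docker daemon brings it back up at boot with no interactive login:

$ sudo docker run -d --restart unless-stopped --runtime nvidia --network host --name fastapi-server my-fastapi-image:latest

With --restart unless-stopped, the daemon restarts the container after a reboot as long as it was not explicitly stopped.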

Hi,

You will need to use one of our containers that matches the L4T version on your system in order to use the iGPU.

Usually, our container tags include the L4T branch version they target.
For example, nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3

Then compare it to the L4T version of your OS, which is returned by the following command:

$ cat /etc/nv_tegra_release
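The first line of that file contains the release and revision. As a sketch (assuming the usual format of that file), the matching tag prefix can be extracted like this:

$ head -n 1 /etc/nv_tegra_release | sed -E 's/^# R([0-9]+) \(release\), REVISION: ([0-9.]+),.*/r\1.\2/'

On JetPack 4.6.1 this should print r32.7.1, which matches the r32.7.1 container tag above.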

Thanks.
