Orin Nano (L4T R36.4.3) Docker Pull Fails: "manifest unknown" on nvcr.io

Hello,

I’m trying to pull a container from nvcr.io on a fresh installation of JetPack 6.x on my Jetson Orin Nano, but I’m consistently running into a “manifest unknown” error. I would greatly appreciate any guidance.

Here is a detailed summary of my environment and all the troubleshooting steps I’ve already taken:

1. System Environment:

  • Device: NVIDIA Jetson Orin Nano Developer Kit
  • L4T Version: R36.4.3 (from /etc/nv_tegra_release) (JetPack 6.x)
  • Setup: Booting from a microSD card with a completely fresh flash of the official JetPack 6.x SD card image. The NVMe drive is installed but has not been mounted or configured yet for these tests.
  • Python Version: 3.10.12
  • CUDA Toolkit: 12.6 (V12.6.68 from nvcc --version)
  • cuDNN Version: 9.3.0.75 (for CUDA 12.6, from dpkg -l | grep libcudnn)

2. The Specific Problem: When I try to pull the official L4T PyTorch container from nvcr.io, the command fails.

  • Command Used: docker pull nvcr.io/nvidia/l4t-pytorch:r36.3.0-torch2.2-py3
  • Error Message: Error response from daemon: manifest for nvcr.io/nvidia/l4t-pytorch:r36.3.0-torch2.2-py3 not found: manifest unknown: manifest unknown.
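
For completeness, the registry can also be asked for the manifest directly, without downloading any layers. This is only a diagnostic sketch using the same tag as above (depending on the Docker version, docker manifest may need the experimental CLI features enabled), but it should show whether the registry itself is reporting the tag as missing:

  # Query the manifest only; no layers are downloaded.
  # If the tag really does not exist, this fails with the same
  # "manifest unknown" message; a network or TLS problem fails differently.
  docker manifest inspect nvcr.io/nvidia/l4t-pytorch:r36.3.0-torch2.2-py3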

3. What I’ve Already Ruled Out:

  • Docker Installation: The Docker daemon is running correctly. docker run hello-world (from the default Docker Hub) works perfectly. My user is in the docker group.
  • NGC Authentication: I have successfully logged in to NVIDIA’s registry using an API key via docker login nvcr.io. The command reports Login Succeeded. The error persists after a successful login.
  • Specific Container Tag: The error is not specific to one tag. I have tried both r36.3.0-torch2.2-py3 and the older r36.2.0-torch2.1-py3 tag, and both fail with the same “manifest unknown” error.
  • Network Firewall/Proxy: This issue occurs on three completely different networks: my office LAN, a Verizon mobile hotspot, and my home Comcast Wi-Fi. This strongly suggests it is not a network-specific firewall issue.
  • DNS Issues: While connected to my home Comcast network, I manually changed the Orin Nano’s DNS servers to Google (8.8.8.8) and Cloudflare (1.1.1.1). The error still persists, making a simple DNS resolution problem unlikely.
  • Related Access Issues: I have also experienced similar network access problems when trying to download source code ZIPs from codeload.github.com (getting 404 errors on valid links) and when trying to git clone public repositories (being incorrectly prompted for authentication). This suggests there might be a common, underlying issue on my Orin Nano’s OS or network stack that affects access to several developer services.
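
If lower-level checks would help, the following sketch probes the registry endpoint itself rather than any particular image; these are generic Docker Registry v2 checks, nothing specific to my setup:

  # Confirm name resolution and basic reachability of the registry.
  nslookup nvcr.io
  # A reachable registry normally answers /v2/ with HTTP 401 (authentication
  # required), which still proves DNS, routing, and TLS all work end to end.
  curl -sI https://nvcr.io/v2/ | head -n 5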

My Core Question: Given that this “manifest unknown” error for nvcr.io occurs on a completely clean, freshly flashed JetPack 6.x system, across multiple networks, and after ruling out standard DNS and authentication issues, what could be the root cause? Is this a known issue with L4T R36.4.3, or is there a deeper system-level configuration or fix required to allow this device to correctly access and pull from NVIDIA’s container registry?

Any help or diagnostic steps would be greatly appreciated.

Thank you!

Hello,

I have an important update on this issue based on further troubleshooting.

Following advice, I started with the completely fresh flash of JetPack 6.x (L4T R36.4.3) and performed a full system update using sudo apt update && sudo apt full-upgrade, and then rebooted.

The good news is that this did fix some of my other networking issues. For example, sudo snap install chromium, which previously failed because it “could not connect to snap”, now works correctly. This confirms the system updates resolved some underlying connectivity problems.

However, the original problem with Docker still persists. After the system update and a successful docker login nvcr.io, the pull command still fails with the same error.

As a final diagnostic, I watched the live Docker daemon logs using sudo journalctl -u docker -f while attempting the pull. At the moment of failure, the daemon itself logs the following error:

level=error msg="Not continuing with pull after error" error="manifest unknown: manifest unknown"

This confirms the daemon is receiving this “manifest unknown” error directly from the registry and not generating a more specific underlying network or SSL error locally. The full daemon startup log also showed this warning: CDI setup error /var/run/cdi: failed to monitor for changes: no such file or directory.
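
I assume the CDI warning is unrelated, since CDI only affects how GPUs are exposed to containers at run time, not how images are pulled. For reference, my understanding is that it can be cleared by regenerating the CDI specification with the NVIDIA Container Toolkit (a sketch, assuming nvidia-container-toolkit is installed, which it is on JetPack):

  # CDI specs live in /etc/cdi and /var/run/cdi; the warning only says the
  # latter directory is missing. Generating a spec is the documented setup
  # step for CDI (assumes nvidia-container-toolkit from JetPack).
  sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
  sudo systemctl restart docker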

To summarize, even on a fully updated fresh OS installation where other services like snap now work, and across three different networks (office, Verizon, Comcast) with public DNS configured, my Orin Nano cannot pull any l4t-* container from nvcr.io.

This seems to isolate the problem to a very specific and stubborn issue between the Docker daemon on L4T R36.4.3 and the nvcr.io registry. I am now completely blocked.

Any help or further diagnostic steps from NVIDIA’s engineers would be greatly appreciated.

Thank you.

Hello,

A quick follow-up with another diagnostic data point for this issue:

I’ve confirmed that while my Orin Nano can browse the web, it seems to have a specific issue with command-line tools accessing certain backend servers.

  • What Fails: wget commands for direct ZIP downloads of source code from GitHub (which redirect to codeload.github.com) consistently fail with a 404 Not Found error. This is consistent with the docker pull command failing with a “manifest unknown” error from nvcr.io.
  • What Succeeds: However, I was just able to successfully download a repository ZIP file by navigating to its main page on GitHub using the Chromium web browser on the Orin Nano and clicking the “Download ZIP” button.

This new information seems to confirm that the issue is not a total network failure, but a very specific problem related to how command-line tools (docker, wget, git) are interacting with the network stack on this fresh L4T R36.4.3 installation.
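
If a full HTTP trace of one of the failing downloads would help, this is roughly what I would capture (the URL is only an illustrative placeholder for the kind of link that fails for me):

  # Show the server response headers, following redirects, without keeping
  # the file; EXAMPLE_OWNER/EXAMPLE_REPO is a hypothetical placeholder.
  wget --server-response --max-redirect=5 -O /dev/null \
      https://codeload.github.com/EXAMPLE_OWNER/EXAMPLE_REPO/zip/refs/heads/main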

I hope this additional detail helps narrow down the potential cause.

Thank you.

Hi,

The container doesn’t exist.
The latest tag for l4t-pytorch is r35.2.1-pth2.0-py3.

From JetPack 6, the Jetson PyTorch container has moved to the nvcr.io/nvidia/pytorch repository:

Please find one with an igpu tag for the Jetson device.
For example, nvcr.io/nvidia/pytorch:25.05-py3-igpu

Thanks.

Hello,

Thank you to the community and the NVIDIA team for the help. I have an update and a solution.

The Solution: The “manifest unknown” error was resolved by following the advice in the NVIDIA staff reply above. The issue is that for JetPack 6.x, the container repository has changed.

The correct container to pull is: docker pull nvcr.io/nvidia/pytorch:25.05-py3-igpu

Using this new container name, the docker pull and docker run commands worked perfectly. In hindsight, the “manifest unknown” error simply meant the old l4t-pytorch tags I was requesting do not exist on nvcr.io; the separate wget and git problems I saw appear to be an unrelated quirk of my environment, but the new, correct container path works without issue.

Summary for others: If you are on JetPack 6.x (L4T R36.x) and want the official PyTorch container, do NOT use the old l4t-pytorch tags. You must use the new pytorch repository with an -igpu tag, like nvcr.io/nvidia/pytorch:25.05-py3-igpu.

After successfully pulling and running this container, I was able to pip3 install ultralytics, and it correctly identified the pre-installed GPU versions of PyTorch and Torchvision. My development environment is now working.
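
For anyone who lands here later, this is roughly the sequence that worked for me; the exact flags reflect my setup rather than anything official (--runtime nvidia assumes the default JetPack Docker configuration, and the tag will change as new releases appear on NGC):

  # Pull the JetPack 6 compatible PyTorch container and run a quick GPU check.
  sudo docker pull nvcr.io/nvidia/pytorch:25.05-py3-igpu
  sudo docker run -it --rm --runtime nvidia --network host \
      nvcr.io/nvidia/pytorch:25.05-py3-igpu \
      python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"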

Thank you again for the pointers that led to the solution!


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.