PyTorch ImportError: libcudnn.so.9 missing on JetPack 6.1 (L4T R36.4, Jetson AGX Orin)

Environment details

Hardware: Jetson AGX Orin
OS: Ubuntu 22.04 (aarch64)
JetPack / L4T: 6.1 (R36.4)
CUDA: 12.6
cuDNN: Not found
Python: 3.10
PyTorch wheel used: torch-2.4.0+nv24.06-cp310-cp310-linux_aarch64.whl
TorchVision: torchvision-0.19.0+nv24.06-cp310-cp310-linux_aarch64.whl

Problem

The PyTorch wheel installs successfully, but importing torch fails with:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/nvidia/.local/lib/python3.10/site-packages/torch/__init__.py", line 238, in <module>
    from torch._C import *  # noqa: F403
ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory

CUDA itself appears functional:

$ /usr/local/cuda/bin/nvcc --version
Cuda compilation tools, release 12.6, V12.6.77

$ /tmp/test_cuda
CUDA Device count=0

However, checking for cuDNN shows it’s completely missing:

$ ls -la /usr/lib/aarch64-linux-gnu/libcudnn*
ls: cannot access '/usr/lib/aarch64-linux-gnu/libcudnn*': No such file or directory

And there are no cuDNN-related packages listed:

$ dpkg -l | grep cudnn
(no output)
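Two quick checks can rule out a cuDNN copy sitting outside the default apt paths; a small sketch (the search prefixes are typical JetPack locations, not confirmed for this particular image):

```shell
# Ask the dynamic linker whether any cuDNN library is registered at all
ldconfig -p | grep -i cudnn || echo "no cudnn entries in the linker cache"

# Look for stray copies under common prefixes (e.g. from a tarball install)
find /usr/lib /usr/local -name 'libcudnn*' 2>/dev/null
```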


What I’ve tried

  1. Verified that /usr/local/cuda and environment paths are correctly set:

    export PATH=/usr/local/cuda/bin:$PATH
    export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
  2. Confirmed CUDA and driver installation are correct:

    • nvgpu module is loaded

    • /proc/driver/nvidia/version shows the correct L4T version

  3. Installed PyTorch Jetson wheel for JetPack 6.x (nv24.06) via pip

  4. Searched for cuDNN packages (apt search cudnn, apt search nvidia-l4t-cudnn) — none available

  5. Attempted to reinstall CUDA-related packages:

    sudo apt install --reinstall nvidia-l4t-cuda nvidia-l4t-cuda-utils nvidia-l4t-core
    → installs successfully, but no libcudnn.so files appear.
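One further check that may be worth adding to the list above: read the ELF NEEDED entries of the wheel's CUDA library to see exactly which cuDNN soname it was linked against (libtorch_cuda.so under the package's lib/ directory is the usual location inside the wheel; treat the path as an assumption):

```shell
# Locate the installed torch package without importing it (import fails here)
TORCH_DIR=$(python3 -c "import importlib.util as u; s = u.find_spec('torch'); print(s.submodule_search_locations[0] if s else '')")

if [ -n "$TORCH_DIR" ]; then
    # List the cuDNN sonames libtorch_cuda.so declares as dependencies
    readelf -d "$TORCH_DIR/lib/libtorch_cuda.so" | grep -i 'NEEDED.*cudnn' \
        || echo "no cudnn NEEDED entry found"
else
    echo "torch package not found"
fi
```

Whichever soname shows up there (libcudnn.so.8 vs. libcudnn.so.9) tells you which cuDNN major version the wheel actually requires.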


Questions

  1. How can I install the correct cuDNN 9 runtime on JetPack 6.1 / L4T R36.4?
    Is there an official .deb package (nvidia-l4t-cudnn or libcudnn9) available in the Jetson apt repo?

  2. Do the JetPack 6.x PyTorch wheels (e.g., torch-2.4.0+nv24.06) require cuDNN 9 specifically?
    If so, can you confirm the compatible JetPack release / cuDNN pairing?

  3. Is re-flashing or repairing with SDK Manager required to restore cuDNN (since it’s missing entirely on this BSP)?
    If yes, which exact components should be selected (CUDA, cuDNN, TensorRT, Jetson Runtime)?

  4. Could this issue be due to a BSP difference between the official NVIDIA JetPack and Nexcom’s customized JetPack image?
    (The device ships with a preconfigured system image from Nexcom.)


Additional context

  • Other CUDA samples (like deviceQuery) compile and run successfully.

  • /usr/lib/aarch64-linux-gnu/libcuda.so exists.

  • Only the cuDNN shared libraries are missing.

  • This causes both PyTorch and TensorRT to fail at runtime.
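The TensorRT failure can be traced the same way; a sketch assuming libnvinfer.so sits in the standard JetPack path:

```shell
# Show which cuDNN soname TensorRT's core library declares as a dependency
readelf -d /usr/lib/aarch64-linux-gnu/libnvinfer.so 2>/dev/null \
    | grep -i 'NEEDED.*cudnn' \
    || echo "libnvinfer not found, or no cudnn NEEDED entry"
```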


Goal

I want to:

  • Enable GPU acceleration in PyTorch and TorchVision (torch.cuda.is_available() → True)

  • Ensure all CUDA/cuDNN/TensorRT libraries are correctly aligned with JetPack 6.1 (R36.4)

  • Avoid full reflash if possible (prefer apt-based or manual cuDNN installation)


Any guidance, official download link, or confirmation of the correct cuDNN version for JetPack 6.1 would be extremely helpful.

Hi,

Please try the package shared in the link below:

Thanks.

The link you shared here is broken.

Hi,

Sorry for the incorrect message.
Please try the jp6/cu126 index instead.

Thanks.
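For anyone else hitting this: the jp6/cu126 index mentioned above can be used with pip roughly like this (the URL is my assumption of where that index lives; please confirm it against the official Jetson PyTorch announcement before running):

```shell
# Install Jetson wheels from the JetPack 6 / CUDA 12.6 index.
# Index URL is an assumption, not confirmed in this thread.
python3 -m pip install torch torchvision \
    --index-url https://pypi.jetson-ai-lab.dev/jp6/cu126
```

Afterwards, python3 -c "import torch; print(torch.cuda.is_available())" should report True once the matching cuDNN runtime is resolvable.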
