## Environment details

- Hardware: Jetson Orin AGX
- OS: Ubuntu 22.04 (aarch64)
- JetPack / L4T: 6.1 (R36.4)
- CUDA: 12.6
- cuDNN: not found
- Python: 3.10
- PyTorch wheel used: `torch-2.4.0+nv24.06-cp310-cp310-linux_aarch64.whl`
- TorchVision: `torchvision-0.19.0+nv24.06-cp310-cp310-linux_aarch64.whl`
## Problem

The PyTorch installation succeeds, but `import torch` always fails with:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/nvidia/.local/lib/python3.10/site-packages/torch/__init__.py", line 238, in <module>
    from torch._C import *  # noqa: F403
ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory
```
CUDA itself appears functional:

```
$ /usr/local/cuda/bin/nvcc --version
Cuda compilation tools, release 12.6, V12.6.77

$ /tmp/test_cuda
CUDA Device count=0
```
However, checking for cuDNN shows it’s completely missing:

```
$ ls -la /usr/lib/aarch64-linux-gnu/libcudnn*
ls: cannot access '/usr/lib/aarch64-linux-gnu/libcudnn*': No such file or directory
```
And there are no cuDNN-related packages listed:

```
$ dpkg -l | grep cudnn
(no output)
```
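As a cross-check that involves neither PyTorch nor apt, the dynamic linker can be queried from Python via `ctypes.util.find_library` (a sketch; it consults the same `ldconfig` cache the loader searches at import time):

```python
from ctypes.util import find_library

# Ask the linker cache whether any libcudnn is resolvable. On a healthy
# JetPack install this prints a soname; here it should print None,
# matching the empty ls/dpkg output above.
cudnn = find_library("cudnn")
print("cudnn ->", cudnn)
```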
## What I’ve tried

- Verified that `/usr/local/cuda` and environment paths are correctly set:

  ```
  export PATH=/usr/local/cuda/bin:$PATH
  export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
  ```

- Confirmed CUDA and driver installation are correct:
  - the `nvgpu` module is loaded
  - `/proc/driver/nvidia/version` shows the correct L4T version
- Installed the PyTorch Jetson wheel for JetPack 6.x (`nv24.06`) via pip
- Searched for cuDNN packages (`apt search cudnn`, `apt search nvidia-l4t-cudnn`) — none available
- Attempted to reinstall CUDA-related packages: `sudo apt install --reinstall nvidia-l4t-cuda nvidia-l4t-cuda-utils nvidia-l4t-core` → installs successfully, but no `libcudnn.so` files appear
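A minimal reproduction that takes pip and the wheel out of the picture entirely is to `dlopen` the exact soname from the traceback (a sketch; the only assumption is the soname `libcudnn.so.8` taken from the ImportError above):

```python
import ctypes

# Try to load the soname PyTorch's import fails on. If this raises the
# same "cannot open shared object file" error, the missing library is
# the whole problem, not the torch wheel or the pip install.
try:
    ctypes.CDLL("libcudnn.so.8")
    status = "loaded"
except OSError as exc:
    status = f"failed: {exc}"

print(status)
```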
## Questions

- How can I install the correct cuDNN 9 runtime on JetPack 6.1 / L4T R36.4? Is there an official `.deb` package (`nvidia-l4t-cudnn` or `libcudnn9`) available in the Jetson apt repo?
- Do the JetPack 6.x PyTorch wheels (e.g., `torch-2.4.0+nv24.06`) require cuDNN 9 specifically? If so, can you confirm the compatible JetPack release / cuDNN pairing?
- Is re-flashing or repairing with SDK Manager required to restore cuDNN (since it’s missing entirely on this BSP)? If yes, which exact components should be selected (CUDA, cuDNN, TensorRT, Jetson Runtime)?
- Could this issue be due to a BSP difference between the official NVIDIA JetPack and Nexcom’s customized JetPack image? (The device ships with a preconfigured system image from Nexcom.)
## Additional context

- Other CUDA samples (like `deviceQuery`) compile and run successfully.
- `/usr/lib/aarch64-linux-gnu/libcuda.so` exists.
- Only the cuDNN shared libraries are missing.
- This causes both PyTorch and TensorRT to fail at runtime.
## Goal

I want to:

- Enable GPU acceleration in PyTorch and TorchVision (`torch.cuda.is_available()` → `True`)
- Ensure all CUDA/cuDNN/TensorRT libraries are correctly aligned with JetPack 6.1 (R36.4)
- Avoid a full reflash if possible (prefer an apt-based or manual cuDNN installation)
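For completeness, this is the check I'd use to confirm success once a matching cuDNN is in place (guarded so it degrades gracefully while the install is still broken; nothing beyond the `torch` package itself is assumed):

```python
import importlib.util

# Summarize the state of the install: missing, broken, or working.
if importlib.util.find_spec("torch") is None:
    summary = "torch not installed"
else:
    try:
        import torch
        summary = f"torch {torch.__version__}, cuda available: {torch.cuda.is_available()}"
    except (ImportError, OSError) as exc:  # e.g. the libcudnn.so.8 failure
        summary = f"torch import failed: {exc}"

print(summary)
```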
Any guidance, official download link, or confirmation of the correct cuDNN version for JetPack 6.1 would be extremely helpful.