Hi everyone,
I’m trying to set up a theft detection system on my Jetson Orin Nano, but I’m having a persistent issue installing a CUDA-enabled version of PyTorch. I have tried many methods and always run into problems.
Here are the details of my setup and what I’ve tried.
My Configuration
- Edge AI Computer: NVIDIA Jetson Orin Nano (8 GB)
- Operating System: NVIDIA L4T 36.4.4 (JetPack 6.2)
- Installed Libraries: CUDA 12.6, TensorRT 10.3
- System Libraries: libcudnn.so.9 is present on the system (checked with the small script below).
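In case it helps, this is roughly how I confirmed that the version-9 cuDNN library (and no libcudnn.so.8) is on the board. The search path is just where the cuDNN packages land on my JetPack install; it may differ on other setups.

```python
import glob

# List every cuDNN shared library on the system.
# /usr/lib/aarch64-linux-gnu/ is where the cuDNN packages install
# on my JetPack 6.2 board; adjust the path if yours differs.
for path in sorted(glob.glob("/usr/lib/aarch64-linux-gnu/libcudnn*.so*")):
    print(path)
```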
The Problem
I need to install a CUDA-enabled version of PyTorch and Torchvision that is compatible with JetPack 6.2 and libcudnn.so.9.
All my installation attempts result in one of the following errors:
- ImportError: libcudnn.so.8: PyTorch cannot load cuDNN because the wheel was built against cuDNN 8, but my system only has cuDNN 9.
- CUDA available: False: After a .whl file installs successfully, my PyTorch test script (sketched below) reports no CUDA support, meaning a CPU-only build was installed.
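The test script I mean is nothing more than this; the prints are mine, and nothing in it is specific to Jetson:

```python
import torch

# Quick sanity check of the installed PyTorch build.
print("Torch version:", torch.__version__)            # e.g. "2.8.0+cpu" on the bad installs
print("CUDA available:", torch.cuda.is_available())   # has printed False on every attempt
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("cuDNN version:", torch.backends.cudnn.version())
```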
Troubleshooting Steps Taken
I have attempted the following methods, all of which have failed:
- pip3 install from direct links: Every link I found (in NVIDIA forums and guides) returned a “404 Not Found” or “410 Gone” error.
- pip3 install --extra-index-url: This installed PyTorch successfully, but the wrong build (e.g., 2.8.0+cpu), which has no GPU support.
- Manual .whl file installation: I downloaded the wheels on a different computer and transferred them over. This led either to dependency conflicts between PyTorch and Torchvision, or to an installed PyTorch that still did not support CUDA (see the version check after this list).
- docker run: Pulling the official NVIDIA containers failed with “manifest not found” errors, because the image tags change frequently and the tags given in the guides were no longer valid.
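After each attempt I check which wheels actually ended up installed with a small script like this (only the standard package names are assumed; the +cpu suffix is how I spot the CPU-only builds):

```python
from importlib import metadata

# Report which torch/torchvision builds are currently installed.
for pkg in ("torch", "torchvision"):
    try:
        print(f"{pkg}: {metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed")
```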
I am at a loss for a solution. Can someone please provide a direct, working method to install the correct packages? Any help would be greatly appreciated.
Thx butch