Hi NVIDIA team,
I’m using JetPack 6.2 (L4T R36.4.3) on a Jetson Orin Nano and trying to install PyTorch with GPU support. I noticed in another forum thread that there’s no official .whl
available yet for JetPack 6.2 in the NVIDIA PyTorch wheel repository.
I followed a community suggestion and ran the following command:
pip3 install torch==2.8.0 torchvision==0.23.0 torchaudio==2.8.0 --index-url https://pypi.jetson-ai-lab.dev/jp6/cu126
However, this resulted in a connection error and ultimately failed with:
WARNING: Retrying (Retry(total=4, ...)): Failed to establish a new connection: [Errno 113] No route to host
...
ERROR: Could not find a version that satisfies the requirement torch==2.8.0 (from versions: none)
ERROR: No matching distribution found for torch==2.8.0
Errno 113 (No route to host) means the TCP connection itself could not be established, which suggests the index is either not public, not functional, or unreachable from my network.
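For reference, this is the quick reachability check I ran to separate a DNS problem from a connection problem (the hostname is simply extracted from the index URL above):

```shell
# Index URL from the failing pip command
INDEX_URL="https://pypi.jetson-ai-lab.dev/jp6/cu126"

# Extract the hostname so DNS and HTTPS reachability can be tested separately
HOST="${INDEX_URL#https://}"
HOST="${HOST%%/*}"

# Errno 113 (No route to host) fails at the TCP layer, so check name
# resolution first, then an actual HTTPS request
getent hosts "$HOST" || echo "DNS lookup failed for $HOST"
if curl -sI --max-time 10 "$INDEX_URL/" >/dev/null; then
    echo "index reachable"
else
    echo "HTTPS request to $INDEX_URL failed"
fi
```

If DNS resolves but the HTTPS request fails, the problem is on the routing/firewall side rather than a missing package version.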
❗ The issue
JetPack 6.2 ships with CUDA 12.6 and cuDNN 8.9.5, but there is no official PyTorch GPU .whl built against this combination. This makes it difficult to deploy GPU-accelerated PyTorch workloads on JetPack 6.2.
❓ My questions
- Will NVIDIA release official PyTorch .whl files with GPU support for JetPack 6.2 (L4T R36.4)?
- Is there an estimated timeline?
- Are there recommended alternatives or Docker images for this version?
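In case it helps, this is the container-based workaround I’m currently considering. The image name and tag are assumptions on my part (taken from the community jetson-containers project), so please correct me if there is an official image for R36.4:

```shell
# Community PyTorch image for JetPack 6 -- the tag is an assumption;
# check the dustynv/jetson-containers project for the tag matching L4T R36.4
IMAGE="dustynv/l4t-pytorch:r36.4.0"

# --runtime nvidia exposes the Jetson GPU inside the container
if command -v docker >/dev/null; then
    sudo docker run --rm --runtime nvidia "$IMAGE" \
        python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
fi
```

If `torch.cuda.is_available()` prints `True` inside the container, the GPU path works and only the native wheel is missing.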
📋 System Information
- Device: Jetson Orin Nano
- JetPack: 6.2
- L4T: R36.4.3
- CUDA: 12.6
- cuDNN: 8.9.5
- Python: 3.10
- Ubuntu: 22.04
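For completeness, these are the commands I used to read the versions above directly from the device (the file and tool paths are the standard JetPack locations; each check is guarded so it is skipped if absent):

```shell
# L4T release string (standard location on JetPack/L4T systems)
if [ -f /etc/nv_tegra_release ]; then cat /etc/nv_tegra_release; fi

# CUDA toolkit version, if nvcc is on PATH
if command -v nvcc >/dev/null; then nvcc --version | tail -n 1; fi

# Python interpreter version
python3 --version
```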
Thank you very much for your help.