No GPU support in PyTorch for JetPack 6.2 (L4T R36.4) – missing .whl file

Hi NVIDIA team,

I’m using JetPack 6.2 (L4T R36.4.3) on a Jetson Orin Nano and trying to install PyTorch with GPU support. I noticed in another forum thread that there is no official .whl available yet for JetPack 6.2 in the NVIDIA PyTorch repository.

I followed a community suggestion and ran the following command:

pip3 install torch==2.8.0 torchvision==0.23.0 torchaudio==2.8.0 --index-url https://pypi.jetson-ai-lab.dev/jp6/cu126

However, this resulted in a connection error and ultimately failed with:

WARNING: Retrying (Retry(total=4, ...): Failed to establish a new connection: [Errno 113] No route to host
...
ERROR: Could not find a version that satisfies the requirement torch==2.8.0 (from versions: none)
ERROR: No matching distribution found for torch==2.8.0

This suggests the index is either not public or not functional.
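A quick way to confirm whether the index host answers at all (just a sketch on my part, using the URL from the failing command above):

# Minimal reachability check for the package index used above.
import urllib.request

url = "https://pypi.jetson-ai-lab.dev/jp6/cu126/"
try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        print("Index responded:", resp.status, resp.reason)
except OSError as exc:
    print("Index not reachable:", exc)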


❗ The issue

JetPack 6.2 ships with CUDA 12.6 and cuDNN 8.9.5, but there is no official PyTorch GPU .whl for this version. This makes it hard to use JetPack 6.2 for real-time AI workloads that need GPU acceleration.
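For context, this is roughly what a CPU-only torch build looks like from Python (a sketch, not output from my board): the import succeeds, but torch reports no CUDA support.

# Symptom of a CPU-only torch build: the module imports,
# but no CUDA device is visible, so the GPU cannot be used.
import torch

print(torch.__version__)
print(torch.version.cuda)          # None for CPU-only builds
print(torch.cuda.is_available())   # False without a GPU-enabled wheel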


❓ My questions

  1. Will NVIDIA release official PyTorch .whl files with GPU support for JetPack 6.2 (L4T R36.4)?
  2. Is there an estimated timeline?
  3. Are there recommended alternatives or Docker images for this version?

📋 System Information

  • Device: Jetson Orin Nano
  • JetPack: 6.2
  • L4T: R36.4.3
  • CUDA: 12.6
  • cuDNN: 8.9.5
  • Python: 3.10
  • Ubuntu: 20.04

Thank you very much for your help.

Hi,

The URL has been changed.

Please try the package in the link below:

Thanks.

Thanks to the instructions shared, I was able to get PyTorch running with GPU support on my Jetson Orin Nano using JetPack 6.2 + CUDA 12.6.

Here’s what worked for me:

# Remove any previously installed (CPU-only) builds first
pip uninstall torch torchvision torchaudio -y
pip install --upgrade pip setuptools wheel

# Install the CUDA 12.6 builds from the Jetson AI Lab index
pip install torch==2.8.0 torchvision==0.23.0 torchaudio==2.8.0 \
    --index-url https://pypi.jetson-ai-lab.io/jp6/cu126

Then I verified it with:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available(), torch.cuda.get_device_name(0))"

Output:

2.8.0 12.6 True NVIDIA Orin
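
If anyone wants a slightly stronger check than the one-liner, here is a small sketch (same environment, nothing extra assumed) that actually launches a kernel on the GPU:

# Run a small matrix multiply on the GPU to confirm CUDA kernels execute,
# not just that the device enumerates.
import torch

assert torch.cuda.is_available(), "CUDA build not detected"
x = torch.randn(1024, 1024, device="cuda")
y = torch.randn(1024, 1024, device="cuda")
z = x @ y
torch.cuda.synchronize()
print("matmul OK:", z.shape, "on", torch.cuda.get_device_name(0))
print("cuDNN version seen by torch:", torch.backends.cudnn.version())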