JetPack 6.0 – Missing PyTorch + CUDA Support for Jetson Inference (Need Official Wheel or Build Instructions)

Hi team,

I’m working on a production-quality computer vision demo for a client using a Jetson Orin Nano (JetPack 6.0 / L4T R36.4.3). The project runs a YOLOv5 model in real time on a webcam feed, and while the pipeline is fully functional, we’re currently stuck on CPU inference.

I’ve built a clean Docker container and confirmed that OpenCV, Torch, and my detection scripts all work. Even so, torch.cuda.is_available() returns False, and there’s no CUDA-enabled PyTorch wheel available for JetPack 6.0 via pip or the NVIDIA Python index.
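For reference, this is the quick diagnostic I’m running inside the container (a minimal sketch; it just reports what the installed build says rather than raising, so it also runs on CPU-only wheels):

```python
# Quick diagnostic for the installed torch build; returns a string rather
# than raising, so it also works on CPU-only wheels or if torch is missing.
def cuda_status() -> str:
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if not torch.cuda.is_available():
        # torch.version.cuda is None on CPU-only builds
        return f"CUDA unavailable (torch {torch.__version__}, cuda={torch.version.cuda})"
    return f"CUDA ok: {torch.cuda.get_device_name(0)}"

print(cuda_status())
```

On my Orin Nano this reports CUDA as unavailable, which is the problem described above.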

I’ve also attempted:

  • Installing torch==2.2.0+nv24.03 with --extra-index-url=https://pypi.nvidia.com (fails: no matching version)
  • Manually searching for .whl builds in the forums and NGC catalog
  • Reading JetPack 6 early access threads for hints on how to build from source (no confirmed recipe yet)

💬 My questions:

  1. Is NVIDIA planning to publish a CUDA-enabled PyTorch .whl for JetPack 6.0 soon?
  2. Is there a reliable workaround or unofficial wheel that supports GPU inference under JetPack 6.0?
  3. Should we consider downgrading to JetPack 5.1 if we require GPU support for YOLOv5 now?

We chose Jetson hardware specifically for its GPU acceleration capabilities, so being limited to CPU inference at the moment is a major blocker.

Any help or updates would be appreciated!

Thanks,
Carelia Rojas

Hi,

Just want to double-confirm your environment.
Are you using JetPack 6.2? r36.4.3 is the L4T release that ships with JetPack 6.2.

Thanks.

Hi,

Yes, I’m using JetPack 6.2, which includes L4T r36.4.3.

Let me know if you need any more details about the setup.

Thanks!

Hi,

Since TensorRT 10.3 removes Caffe/UFF support, some models and examples in jetson-inference won’t work on it.
If you need jetson-inference, it’s recommended to stay on JetPack 6.0/TensorRT 8.

But for PyTorch, you can find a prebuilt package for JetPack 6.2 on the jp6/cu126 index at pypi.jetson-ai-lab.dev:
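A small sketch of the install command for that index (the index path is assembled from the jp6/cu126 reference in this thread; verify it against the official link before relying on it):

```python
# Hypothetical helper that assembles the pip command for the Jetson AI Lab
# package index. The index path below is assembled from the jp6/cu126
# reference in this thread; verify it against the official link.
JETSON_INDEX = "https://pypi.jetson-ai-lab.dev/jp6/cu126"

def jetson_pip_cmd(package: str) -> list[str]:
    return ["pip3", "install", "--index-url", JETSON_INDEX, package]

print(" ".join(jetson_pip_cmd("torch")))
```

The returned list can be passed to subprocess.run(..., check=True), or joined into a shell command as shown.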

For YOLO, you can check whether the Ultralytics package meets your requirements.
Their example works well in the JetPack 6.2 environment.
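A minimal sketch of the Ultralytics flow (the model name and webcam source are illustrative, not prescribed; the guard lets it degrade gracefully where ultralytics, torch, or a GPU isn’t present):

```python
def run_yolo_webcam(weights: str = "yolov5s.pt"):
    """Run YOLO detection on the default webcam using GPU 0, if possible."""
    try:
        from ultralytics import YOLO  # pip install ultralytics
        model = YOLO(weights)
        # source=0 -> default webcam, device=0 -> first CUDA device,
        # stream=True -> lazy generator of per-frame results
        return model.predict(source=0, device=0, stream=True)
    except Exception:
        # ultralytics/torch/CUDA/webcam not available in this environment
        return None

print("started" if run_yolo_webcam() is not None else "environment not ready")
```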

Thanks.

Hi,

Thanks for the guidance. I followed your instructions and installed PyTorch using the jp6/cu126 index from pypi.jetson-ai-lab.dev, as recommended. However, after installation, I confirmed that:

  • torch.cuda.is_available() returns False

  • torch.cuda.get_device_name(0) raises: AssertionError: Torch not compiled with CUDA enabled

This suggests that the installed torch package is still a CPU-only build, even though I’m running JetPack 6.2 on a Jetson Orin Nano.
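One clue is the wheel’s PEP 440 local version tag: the Jetson builds I tried earlier carry tags like +nv24.03, while CPU-only wheels are tagged +cpu. A small (hypothetical) helper I’m using to sanity-check torch.__version__:

```python
# Hypothetical helper: classify a torch build from its PEP 440 local
# version tag, e.g. "2.2.0+nv24.03" (NVIDIA/Jetson) vs "2.5.0+cpu".
def build_variant(version: str) -> str:
    _, _, local = version.partition("+")
    if local.startswith(("nv", "cu")):
        return "gpu"
    if local == "cpu":
        return "cpu"
    return "unknown"

print(build_variant("2.2.0+nv24.03"))  # -> gpu
```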

Could you please confirm:

  1. Whether the jp6/cu126 index currently includes a CUDA-enabled PyTorch build for Jetson Orin Nano with JetPack 6.2?

  2. Whether there are specific version pins (e.g., torch==2.x.x) I should use to get GPU support?

  3. Any known issues or workarounds if the GPU version fails to install correctly?

Thanks in advance for your support.

Best regards,
Carelia