Hi team,
I’m working on a production-quality computer vision demo for a client using a Jetson Orin Nano (JetPack 6.0 / L4T R36.4.3). The project involves a YOLOv5 model running in real-time via webcam, and while the pipeline is fully functional, we’re currently stuck using CPU inference.
Despite building a clean Docker container and confirming that OpenCV, Torch, and my detection scripts all work, `torch.cuda.is_available()` returns `False`, and there is no CUDA-enabled PyTorch wheel available for JetPack 6.0 via pip or the NVIDIA Python Index.
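For reference, here is the kind of diagnostic I'm running inside the container (a minimal sketch; the helper name `cuda_report` is just mine). It distinguishes "torch not installed" from "CPU-only build" from "CUDA build but runtime unavailable":

```python
# Minimal CUDA diagnostic for a PyTorch install.
# On a CPU-only wheel, torch.version.cuda is None and
# torch.cuda.is_available() returns False.
import importlib.util


def cuda_report():
    """Return (torch_found, cuda_build, cuda_runtime_ok).

    cuda_build / cuda_runtime_ok are None when torch is absent.
    """
    if importlib.util.find_spec("torch") is None:
        return (False, None, None)
    import torch
    return (
        True,
        torch.version.cuda is not None,   # wheel compiled with CUDA?
        torch.cuda.is_available(),        # driver/runtime usable now?
    )


if __name__ == "__main__":
    print(cuda_report())
```

In my container this prints `(True, False, False)`, i.e. torch imports fine but the wheel itself is CPU-only, so it's not a driver issue.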
I’ve also attempted:
- Installing `torch==2.2.0+nv24.03` with `--extra-index-url=https://pypi.nvidia.com` (fails: no matching version)
- Manually searching for `.whl` builds in the forums and the NGC catalog
- Reading JetPack 6 early-access threads for hints on how to build from source (no confirmed recipe yet)
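In case it helps anyone reproduce the wheel search, this is roughly how I scripted it (a sketch, not official tooling: the HTML is passed in so it can be checked offline, and the `nv24` tag filter is just my assumption about how NVIDIA tags its local versions; in practice you would fetch `https://pypi.nvidia.com/torch/` with `urllib.request.urlopen` first):

```python
# Sketch: list wheel filenames from a PEP 503 "simple" index page
# and filter for NVIDIA-tagged builds (e.g. +nv24.03 local versions).
import re


def wheel_names(simple_index_html):
    """Extract .whl filenames from a PEP 503 simple-index HTML page."""
    return re.findall(r">([^<]+\.whl)<", simple_index_html)


def jetpack_candidates(simple_index_html, tag="nv24"):
    """Keep only wheels whose filename carries the given NVIDIA tag."""
    return [w for w in wheel_names(simple_index_html) if tag in w]
```

Running this against the live index is how I confirmed there is currently no `+nv`-tagged aarch64 torch wheel matching JetPack 6.0.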
💬 My questions:
- Is NVIDIA planning to publish a CUDA-enabled PyTorch `.whl` for JetPack 6.0 soon?
- Is there a reliable workaround or unofficial wheel that supports GPU inference under JetPack 6.0?
- Should we consider downgrading to JetPack 5.1 if we require GPU support for YOLOv5 now?
As a developer using Jetson hardware specifically for its GPU acceleration capabilities, being limited to CPU inference at the moment is a major blocker.
Any help or updates would be appreciated!
Thanks,
Carelia Rojas