Hello everyone,
I’m working on deploying a YOLOv8-based vision system on my Nvidia Jetson AGX Orin (JetPack 5.1.2, CUDA 11.4) and trying to convert my model to TensorRT for optimized inference. However, I’m facing issues where PyTorch does not detect the GPU.
System Setup:
- **Jetson Model:** Jetson AGX Orin
- **JetPack Version:** 5.1.2 (L4T 35.4.1)
- **CUDA Version:** 11.4
- **PyTorch Version:** 2.x (installed from NVIDIA's repository)
- **Torchvision Version:** 0.16.0
- **TensorRT:** installed via `sudo apt install nvidia-tensorrt`
- **Python Version:** 3.8.10
- **YOLOv8 Model:** `weights.pt` (attempting conversion to `weights.engine`)
I installed PyTorch from NVIDIA's official repository, but `torch.cuda.is_available()` returns `False`, meaning my model is not running on the GPU. When I attempt to export my model to TensorRT, I get:

```
ValueError: Invalid CUDA 'device=0' requested. Use 'device=cpu' or pass valid CUDA device(s) if available.
torch.cuda.is_available(): False
torch.cuda.device_count(): 0
```
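For context, here is the quick diagnostic I'm running on the board (a minimal sketch; the `cuda_diagnostics` helper is just my own wrapper around standard `torch` attributes). A CPU-only wheel typically reports `torch.version.cuda` as `None`, which is the usual symptom of having installed a generic PyPI wheel rather than a Jetson-specific build:

```python
def cuda_diagnostics():
    """Return diagnostic strings describing the local PyTorch install."""
    lines = []
    try:
        import torch
    except ImportError:
        # torch missing entirely -- nothing more to report.
        return ["torch is not installed in this environment"]

    lines.append(f"torch version: {torch.__version__}")
    # None here means the wheel was built without CUDA support,
    # i.e. a CPU-only build that can never see the Orin's GPU.
    lines.append(f"built with CUDA: {torch.version.cuda}")
    lines.append(f"cuda available: {torch.cuda.is_available()}")
    lines.append(f"device count: {torch.cuda.device_count()}")
    return lines

if __name__ == "__main__":
    for line in cuda_diagnostics():
        print(line)
```

On my machine this prints the `False` / `0` values quoted above.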
I've tried multiple versions of Torch with no luck. I'm trying to avoid starting over and reinstalling everything via a Docker container.
- Are there compatibility issues between PyTorch 2.x and JetPack 5.1.2 that I need to resolve?
- Is there a known working method to install PyTorch with full GPU support on the Jetson AGX Orin?
- Would downgrading to an earlier PyTorch version help?
I’d appreciate any insights or recommended fixes! Thanks in advance!