• Hardware Platform (Jetson / GPU) Jetson Orin Nano
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only) 5.1.5
• TensorRT Version 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only) X
• Issue Type (questions, new requirements, bugs) questions
Hello,
I am currently attempting to convert a YOLO model to ONNX and apply it to the DeepStream SDK on the Jetson platform, following the official guidelines provided on the YOLO website.
(Ultralytics guide: *Ultralytics YOLO11 on NVIDIA Jetson using DeepStream SDK and TensorRT*)
While everything works without issues on JetPack 6, on JetPack 5.1.5 the model fails to produce correct detections (bounding boxes). Both YOLOv8 and YOLO11 exhibit the same issue.
After various tests, I found that the problem occurs when the ONNX model is exported with a CUDA-enabled build of PyTorch; when exporting with a CPU-only build, the resulting model works correctly.
What I am looking for is a way to export the YOLO model to ONNX using a CUDA-enabled PyTorch on JetPack 5.1.5 and run the result with the DeepStream SDK.
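For reference, this is roughly the export flow I am following, sketched with the standard Ultralytics CLI (the model file `yolov8s.pt` here is just a placeholder, and the `opset`/`device` values are my assumptions, not prescribed by the guide). Forcing `device=cpu` makes the export trace run on the CPU even when a CUDA build of PyTorch is installed, which is how I currently avoid the problem:

```shell
# Install the exporter and ONNX tooling (versions as in my environment above).
pip install ultralytics onnx

# Export to ONNX. device=cpu forces CPU tracing regardless of the installed
# PyTorch build; opset=12 is a conservative choice for TensorRT 8.5 on JetPack 5.
yolo export model=yolov8s.pt format=onnx device=cpu opset=12
```

With `device=0` (CUDA) instead of `device=cpu`, the exported model is the one that misdetects on JetPack 5.1.5.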
Could you please check whether I might be missing anything?
Thank you!
**PyTorch Version:** 2.2.0
**Torchvision Version:** 0.17.2
**onnxruntime-gpu Version:** 1.17.0