Can I execute YOLOv5 on the GPU of the Jetson AGX Xavier?

@k-hamada please remain respectful and courteous to other posters on the forums, whether they are other members of the community or NVIDIA staff. Everyone here is trying to help each other as well and as quickly as possible.

For ultralytics/yolov5, it appears you have gotten it running with PyTorch and torchvision after installing them. Installing those manually is more involved on ARM than on x86, because the packages must be built from source rather than installed from upstream binaries, as PyTorch does not release CUDA-enabled binaries for ARM. Our recommendation is to use the l4t-pytorch or l4t-ml containers, since these come with PyTorch and torchvision already built, installed, and tested, which alleviates many of the above concerns.
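Either way (manual install or container), a quick sanity check like the following confirms that the CUDA-enabled PyTorch build actually sees the Xavier's integrated GPU (a minimal sketch; the versions and device name printed will depend on your JetPack release and container tag):

```python
import torch
import torchvision

print("PyTorch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # On the AGX Xavier this should report the integrated Volta GPU
    print("Device:", torch.cuda.get_device_name(0))
    # Small matrix multiply on the GPU as a functional check
    x = torch.rand(3, 3, device="cuda")
    print(x @ x)
```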

Regarding performance, what Aasta and Dane have been recommending is that, if you want optimized performance, you run the YOLOv5 model with TensorRT (and/or DeepStream, which uses TensorRT underneath). Running the original PyTorch version will work, but you will get significant performance gains from using TensorRT for inference. There are a number of GitHub forks that run YOLOv5 with TensorRT instead of PyTorch (for example, https://github.com/enazoe/yolo-tensorrt); one common path is sketched below.
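As a rough illustration of that path (a minimal sketch, not the exact workflow of the fork linked above; the yolov5 repository also ships its own export script): export the model to ONNX, then let TensorRT build an optimized engine from the ONNX file, for example with the trtexec tool that ships with JetPack.

```python
import torch

# Load the pretrained YOLOv5s weights via torch.hub (autoshape=False returns the
# raw detection model, which is simpler to trace for ONNX export)
model = torch.hub.load("ultralytics/yolov5", "yolov5s", autoshape=False)
model.eval()

# Export to ONNX with a fixed 1x3x640x640 input (adjust to your input size)
dummy = torch.zeros(1, 3, 640, 640)
torch.onnx.export(
    model,
    dummy,
    "yolov5s.onnx",
    opset_version=12,
    input_names=["images"],
)

# The resulting yolov5s.onnx can then be converted to a TensorRT engine on the
# Xavier, e.g. with:  /usr/src/tensorrt/bin/trtexec --onnx=yolov5s.onnx --fp16
```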

I’m closing this topic now since the original issue appears to have been addressed. If you encounter further issues, please feel free to open a new topic, keeping in mind the NVIDIA Developer Community Code of Conduct.
