YOLOv8 working on Jetson AGX Orin and Orin Nano with TensorRT and CUDA

Continuing the discussion from PyTorch for Jetson:

@f.mainstone I have it all working, including TensorRT. I didn’t touch CUDA at all; the system already had CUDA 11.4, which worked. I then repeated the same steps on a Jetson Orin Nano and it worked again. Here are the notes I took while setting it up:


Using TensorRT with YOLOv8 on the Jetson AGX Orin with nvidia-jetpack 5.1.2-b104

Tested on Python 3.8.10. Should work on any Python from 3.8 to 3.11.
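A quick way to check the interpreter against that range before starting; the helper name and the sketch itself are ours, the 3.8–3.11 bounds come from the note above:

```python
# Sketch: confirm the interpreter is within the tested Python range (3.8–3.11).
import sys

def python_supported(info=sys.version_info):
    """True when the interpreter version is within Python 3.8–3.11."""
    return (3, 8) <= (info[0], info[1]) <= (3, 11)

if __name__ == "__main__":
    print("Python", sys.version.split()[0], "supported:", python_supported())
```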

Torch will NOT be CUDA-compatible if installed via pip.
Install torch>=2.0.0 from a prebuilt wheel (see PyTorch for Jetson for the aarch64 wheel).
Install torchvision>=0.15.1 from source. The torchvision version must match the torch version; see the link above for matching versions.
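A pip-installed torch imports fine on the Jetson but reports CUDA as unavailable, so a sanity check after installing the wheel is worth running. This is a minimal sketch, guarded so it also runs on machines where torch is absent:

```python
# Sanity check, run on the Jetson after installing torch from the wheel:
# distinguishes a wheel-installed CUDA build from a pip-installed CPU-only one.
def cuda_status():
    try:
        import torch
    except ImportError:
        return None  # torch not installed
    return (torch.__version__, torch.cuda.is_available())

if __name__ == "__main__":
    # e.g. ('2.0.0+nv23.5', True) on a correctly set up Orin
    print(cuda_status())
```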

Install cmake>=3.22. (apt will not work; it considers 3.16 the latest. A newer cmake is needed to build onnxsim.)
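One easy way to get a new enough cmake when apt tops out at 3.16 is `pip install cmake`, which ships a recent binary. A small sketch to confirm whatever cmake ends up on PATH meets the requirement (the helper is ours):

```python
# Sketch: check that the cmake on PATH meets the >=3.22 requirement for onnxsim.
import re
import shutil
import subprocess

def cmake_version():
    """Return cmake's (major, minor, patch) tuple, or None if not installed."""
    if shutil.which("cmake") is None:
        return None
    out = subprocess.run(["cmake", "--version"],
                         capture_output=True, text=True).stdout
    m = re.search(r"(\d+)\.(\d+)\.(\d+)", out)
    return tuple(int(x) for x in m.groups()) if m else None

if __name__ == "__main__":
    v = cmake_version()
    print("cmake:", v, "meets >=3.22:", v is not None and v >= (3, 22))
```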

Install the latest versions of the packages below; the tested versions are listed:
ultralytics==8.0.210
onnx==1.15.0
onnxruntime-gpu==1.16.0
onnxsim==0.4.33
tensorrt==8.5.2.2
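With that stack in place, the TensorRT export itself goes through the standard ultralytics API (`format="engine"` routes through ONNX, which is why onnx/onnxsim are needed). A minimal sketch, assuming the packages above are installed; run it on the Jetson itself so TensorRT builds the engine for that GPU:

```python
# Sketch: export YOLOv8 to a TensorRT engine via the ultralytics API.
# The weight file name is just the standard nano checkpoint; guarded so the
# function degrades gracefully where ultralytics is not installed.
def export_to_tensorrt(weights="yolov8n.pt"):
    try:
        from ultralytics import YOLO
    except ImportError:
        return None  # ultralytics not installed in this environment
    model = YOLO(weights)
    # format="engine" exports through ONNX to a TensorRT .engine file
    return model.export(format="engine", device=0)

if __name__ == "__main__":
    engine_path = export_to_tensorrt()
    if engine_path:
        from ultralytics import YOLO
        YOLO(engine_path)  # the .engine file loads back through the same API
```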

For a complete list of working versions, we also have:
torch==2.0.0+nv23.5
torchvision==0.15.1
Cuda compilation tools, release 11.4, V11.4.315
cmake version 3.28.0-rc5
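To compare an environment against that tested set in one go, a small report over installed package metadata can help. A sketch, assuming the distribution names below match what pip registered (the NVIDIA-built torch and JetPack tensorrt usually do, but adjust if `pip list` says otherwise):

```python
# Sketch: report installed vs tested versions for the stack above.
from importlib.metadata import PackageNotFoundError, version

TESTED = {
    "ultralytics": "8.0.210",
    "onnx": "1.15.0",
    "onnxruntime-gpu": "1.16.0",
    "onnxsim": "0.4.33",
    "tensorrt": "8.5.2.2",
    "torch": "2.0.0+nv23.5",
    "torchvision": "0.15.1",
}

def version_report(expected=TESTED):
    """Map each package to (installed_version_or_None, tested_version)."""
    report = {}
    for pkg, want in expected.items():
        try:
            report[pkg] = (version(pkg), want)
        except PackageNotFoundError:
            report[pkg] = (None, want)
    return report

if __name__ == "__main__":
    for pkg, (have, want) in version_report().items():
        print(f"{pkg}: installed={have} tested={want}")
```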

Wow.

Thank you so much for such a detailed and generous explanation!

I’ll give this all a go right away.

Have a lovely day 😊

And for Jetsons, onnxruntime-gpu needs to be installed from a prebuilt wheel obtained from Jetson Zoo - eLinux.org.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.