Jetson Orin NX + JetPack 6.2: Best PyTorch Version & YOLOv5 Deployment Advice Needed

Hello,

I’m currently using a Jetson Orin NX 16GB module running JetPack 6.2, and I would like to deploy YOLOv5 on it with maximum inference speed and minimal latency.

I’ve noticed that some users reported compatibility issues between the latest Ultralytics library and JetPack 6.2. Therefore, I’m planning to use the original YOLOv5 GitHub implementation (v6.2 or earlier) and optimize it manually.

Could you please clarify the following:

  1. Which versions of PyTorch and TorchVision are officially compatible with JetPack 6.2 (CUDA 12.6.68, cuDNN 9.3, TensorRT 10.3)?
  • Should I use the NVIDIA-provided PyTorch wheels?
  • If yes, what is the correct installation method?
  2. What is the recommended approach to achieve the fastest inference on Jetson Orin NX?
  • Should I convert the YOLOv5 model to TensorRT?
  • Can I use FP16 or INT8 precision for better performance?
  • Are there any examples or best practices from NVIDIA for YOLOv5 deployment?
  3. Are there any known issues with the Ultralytics package on JetPack 6.2 that I should be aware of?

Any official guidance or resources would be greatly appreciated. My goal is to run YOLOv5 as efficiently as possible on Jetson Orin NX with JetPack 6.2.

Thank you in advance for your support!

Hi,

We have tested the latest Ultralytics library on JetPack 6.2 with YOLOv11, and it works correctly.
YOLOv5 is therefore expected to work as well, since both come from the same Ultralytics codebase.

You can find the setup info below:

The sample deploys the model with TensorRT (format=engine).
FP16 and INT8 are also worth trying for further acceleration.
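As a minimal sketch of the FP16/INT8 options mentioned above, assuming the Ultralytics package is installed and a YOLOv5 checkpoint (here `yolov5s.pt`, illustrative) is available:

```shell
# Export a checkpoint to a TensorRT engine in FP16 (half precision):
yolo export model=yolov5s.pt format=engine half=True

# Export in INT8; this requires a calibration dataset, passed via data=
# (coco128.yaml is an illustrative example):
yolo export model=yolov5s.pt format=engine int8=True data=coco128.yaml
```

INT8 usually gives the best throughput on Orin, but needs representative calibration data to preserve accuracy; FP16 is a safer first step.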

Thanks.

Thanks for your reply.
1. Which versions of torch and torchvision are officially compatible with JetPack 6.2?
2. How can I convert the YOLOv5 .pt model to TensorRT (engine) format?

Hi,

You can check our NGC catalog for the containers that support Jetson (with the iGPU tag).
The containers are released monthly, and the PyTorch and TorchVision versions are updated accordingly.

For standalone packages, you can find the prebuilt wheels below:
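As a rough sketch of installing a prebuilt wheel, assuming you have downloaded a PyTorch wheel matching your JetPack/CUDA version from the link above (the filename below is purely illustrative):

```shell
# Install the downloaded NVIDIA-built PyTorch wheel for aarch64
# (replace the filename with the actual wheel for your JetPack release):
pip3 install numpy
pip3 install torch-2.5.0-cp310-cp310-linux_aarch64.whl

# Verify that PyTorch sees the Orin GPU:
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```

If `torch.cuda.is_available()` prints `False`, the wheel does not match the installed JetPack/CUDA version.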

You can convert the model to TensorRT.
As described in the link shared above, this can be done directly via the Ultralytics tool:
format=engine exports the model into a TensorRT engine.

$ yolo export model=yolo11n.pt format=engine
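Once exported, the resulting engine file can be used for inference through the same CLI; a short sketch (the source image path is an illustrative assumption):

```shell
# Run inference with the exported TensorRT engine:
yolo predict model=yolo11n.engine source=bus.jpg
```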

Thanks.

Thanks for the reply.
Is there any guide for compiling OpenCV 4.8 with CUDA and Python support?

Hi,

You can find our prebuilt OpenCV package in the same link as well:

To build OpenCV from the source, you can also try the script below:

Due to some compatibility issues related to CUDA, you will need to install OpenCV 4.10+ on JetPack 6.2:
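A minimal configuration sketch for such a source build, assuming OpenCV 4.10.0 and the standard CMake options for CUDA and Python bindings (Orin's GPU is compute capability 8.7; paths and versions are illustrative):

```shell
# Fetch matching opencv and opencv_contrib sources:
git clone --branch 4.10.0 https://github.com/opencv/opencv.git
git clone --branch 4.10.0 https://github.com/opencv/opencv_contrib.git
cd opencv && mkdir build && cd build

# Configure with CUDA, cuDNN, the CUDA DNN backend, and Python 3 bindings:
cmake \
  -D CMAKE_BUILD_TYPE=Release \
  -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
  -D WITH_CUDA=ON \
  -D WITH_CUDNN=ON \
  -D OPENCV_DNN_CUDA=ON \
  -D CUDA_ARCH_BIN=8.7 \
  -D BUILD_opencv_python3=ON \
  ..

make -j"$(nproc)" && sudo make install
```

Setting CUDA_ARCH_BIN to only 8.7 keeps the build time down by compiling kernels for Orin alone.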

Thanks.

Yes,
I built OpenCV 4.12, but I have a problem when I try to run the YOLOv5 .engine model: it reports an incompatibility between CUDA and TensorRT. Any guidance?

There has been no update from you for a while, so we assume this is no longer an issue.
Hence, we are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

Could you share the output log/error with us so we can learn more about the issue?
Thanks.