Run YOLOv8 with Jetson Inference on Jetson Nano

So I have an object detection .onnx model for my project; it was converted from a YOLOv8 PyTorch model (.pth) to .onnx. However, when I run the converted ONNX model with jetson-inference detectnet, it fails:

The reason I want to run the ONNX model with jetson-inference is that without it I only get about 2 FPS.
Is it possible to run a converted YOLOv8 ONNX model with jetson-inference on the Jetson Nano?

Hi,

Could you check if your model can run with TensorRT first?

$ /usr/src/tensorrt/bin/trtexec --onnx=<file>
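If trtexec loads the model, it will also report inference timings. As an additional sanity check, TensorRT's Python API (included with JetPack) can confirm that the ONNX parses and print the input/output tensor names, which are needed later when wiring the model into an application. A minimal sketch, assuming TensorRT 8.x on the Nano and a hypothetical file name yolov8n.onnx:

import tensorrt as trt

# Parse the ONNX model with TensorRT to verify compatibility and list its I/O tensors.
# "yolov8n.onnx" is a hypothetical file name - substitute your exported model.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("yolov8n.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))   # show why TensorRT rejected the model
        raise SystemExit("ONNX parse failed")

print("Inputs :", [(network.get_input(i).name, network.get_input(i).shape)
                   for i in range(network.num_inputs)])
print("Outputs:", [(network.get_output(i).name, network.get_output(i).shape)
                   for i in range(network.num_outputs)])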

Thanks.

@aldhanekadev there isn’t built-in support for YOLO models in jetson-inference. You would need to load the model with the proper input/output layer names (the error you are getting says it can’t find the input layer that you specified in your ONNX model), and you would also need to add the pre/post-processing that YOLO expects. Currently, the detectNet class in jetson-inference is set up for SSD-Mobilenet models trained with train_ssd.py from the Hello AI World tutorial.
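For reference, this is roughly how detectNet is pointed at a custom ONNX exported by the Hello AI World workflow. A minimal sketch, with hypothetical model/label paths, using the SSD-Mobilenet layer names (input_0, scores, boxes) that detectNet expects; a YOLOv8 export uses different tensor names and a different output layout, which is why it fails to load this way. Depending on your jetson-inference version, the imports may be jetson.inference / jetson.utils instead.

from jetson_inference import detectNet
from jetson_utils import videoSource, videoOutput

# Load a custom SSD-Mobilenet ONNX exported by onnx_export.py from Hello AI World.
# The model/label paths below are hypothetical examples.
net = detectNet(argv=[
    "--model=models/fruit/ssd-mobilenet.onnx",
    "--labels=models/fruit/labels.txt",
    "--input-blob=input_0",    # SSD-Mobilenet input layer name
    "--output-cvg=scores",     # coverage/confidence output layer
    "--output-bbox=boxes",     # bounding-box output layer
], threshold=0.5)

camera = videoSource("csi://0")       # or "/dev/video0" for a USB camera
display = videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    if img is None:                   # capture timeout
        continue
    detections = net.Detect(img)
    display.Render(img)
    display.SetStatus("detectNet | {:.0f} FPS".format(net.GetNetworkFPS()))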

The reason I don’t explicitly support the different YOLO models in jetson-inference is that new generations of YOLO are released at a high cadence, and it’s a lot of work for me to support all of those variants along with their training scripts. SSD-Mobilenet, on the other hand, is stable, runs with realtime inference performance across Jetson devices, and has stable PyTorch training scripts that can also be run on the different Jetsons.
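To give an idea of the post-processing mentioned above, here is a rough sketch of decoding a YOLOv8 detection output, assuming the standard Ultralytics ONNX export with a single output of shape (1, 84, 8400): 4 box values (cx, cy, w, h) plus 80 COCO class scores per candidate. The TensorRT inference call itself is omitted; output stands for the raw network output copied back to the host.

import numpy as np

def postprocess(output, conf_thres=0.25, iou_thres=0.45):
    # output: raw YOLOv8 prediction tensor of shape (1, 84, 8400)
    preds = np.squeeze(output, axis=0).T          # -> (8400, 84)
    boxes_cxcywh = preds[:, :4]
    scores = preds[:, 4:]
    class_ids = scores.argmax(axis=1)
    confidences = scores.max(axis=1)

    # drop low-confidence candidates
    keep = confidences > conf_thres
    boxes_cxcywh = boxes_cxcywh[keep]
    class_ids = class_ids[keep]
    confidences = confidences[keep]

    # convert (cx, cy, w, h) -> (x1, y1, x2, y2) in the 640x640 input space;
    # these still need to be scaled back to the original image size
    boxes = np.empty_like(boxes_cxcywh)
    boxes[:, 0] = boxes_cxcywh[:, 0] - boxes_cxcywh[:, 2] / 2
    boxes[:, 1] = boxes_cxcywh[:, 1] - boxes_cxcywh[:, 3] / 2
    boxes[:, 2] = boxes_cxcywh[:, 0] + boxes_cxcywh[:, 2] / 2
    boxes[:, 3] = boxes_cxcywh[:, 1] + boxes_cxcywh[:, 3] / 2

    # simple class-agnostic greedy NMS
    order = confidences.argsort()[::-1]
    kept = []
    while order.size > 0:
        i = order[0]
        kept.append(i)
        if order.size == 1:
            break
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter + 1e-7)
        order = order[1:][iou < iou_thres]

    return boxes[kept], confidences[kept], class_ids[kept]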

BTW here are some open-source resources I found for running YOLOv8 inference with TensorRT:

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.