Run YOLOv8 with Jetson Inference on Jetson Nano

@aldhanekadev there isn’t built-in support for YOLO models in jetson-inference - you would need to load the model with the proper input/output layer names (the error you are getting says it can’t find the input layer that you specified in your ONNX model). You would also need to add the pre/post-processing that YOLO expects (currently, the detectNet class in jetson-inference is set up for SSD-Mobilenet models trained with train_ssd.py from the Hello AI World tutorial).
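
For reference, here is a rough sketch (Python, untested here) of how a custom detection ONNX is normally loaded into detectNet with explicit layer names. The names shown (input_0 / scores / boxes) are what the SSD-Mobilenet ONNX exported from train_ssd.py uses - a YOLOv8 export will have different layer names (commonly assumed to be images / output0), and even with the right names, detectNet’s post-processing expects SSD-style outputs, so the YOLO detections wouldn’t be decoded correctly:

```python
# Sketch: loading a custom SSD-Mobilenet ONNX with detectNet and explicit layer names.
# The model/label paths are placeholders - substitute your own trained model.
from jetson_inference import detectNet
from jetson_utils import videoSource, videoOutput

net = detectNet(argv=[
    "--model=models/fruit/ssd-mobilenet.onnx",   # ONNX exported from train_ssd.py / onnx_export.py
    "--labels=models/fruit/labels.txt",
    "--input-blob=input_0",                      # input layer name in the SSD-Mobilenet ONNX
    "--output-cvg=scores",                       # class confidence output layer
    "--output-bbox=boxes",                       # bounding-box output layer
    "--threshold=0.5",
])

camera = videoSource("/dev/video0")              # or a file/RTSP source
display = videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    if img is None:                              # capture timeout
        continue
    detections = net.Detect(img)                 # runs pre-processing, TensorRT inference, SSD post-processing
    display.Render(img)
    display.SetStatus("detectNet | {:.0f} FPS".format(net.GetNetworkFPS()))
```

A YOLOv8 model would need its own letterbox pre-processing and its own decoding/NMS of the raw output tensor in place of the SSD post-processing that net.Detect() applies.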

The reason I don’t explicitly support the various YOLO models directly in jetson-inference is that new generations of YOLO are released frequently, and it would be a lot for me to maintain all of those different YOLOs along with their training scripts. SSD-Mobilenet, on the other hand, is stable, runs with realtime inference performance across Jetson devices, and has stable PyTorch training scripts (which can also be run on the different Jetsons).

BTW here are some open-source resources I found for running YOLOv8 inference with TensorRT: