Is there any C++ parser for ONNX YOLOv8?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.4
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)

I want to integrate YOLOv8 into my app, but I am unable to find a parser for it.
yolov8m.onnx

input:
  name: images
  tensor: float32[1,3,480,640]

output:
  name: output0
  tensor: float32[1,84,6300]
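
For reference, in the stock Ultralytics YOLOv8 export the 84 rows are the 4 box coordinates (cx, cy, w, h) followed by 80 class scores, and 6300 is the prediction count for a 480×640 input (60×80 + 30×40 + 15×20 grid cells over strides 8, 16, and 32). A minimal C++ indexing sketch, assuming that row-major [84][6300] layout:

    // Assumption: flattened row-major [84][6300] buffer from the stock
    // Ultralytics YOLOv8 export (rows 0-3 = cx,cy,w,h, rows 4-83 = class scores).
    const int kNumAttrs   = 84;   // 4 box coords + 80 classes
    const int kNumAnchors = 6300; // 60*80 + 30*40 + 15*20 for 480x640

    inline float attr(const float* output0, int a, int i)
    {
        // a in [0, kNumAttrs), i in [0, kNumAnchors)
        return output0[a * kNumAnchors + i];
    }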

The following Python code works well with my model, but now I want a C++ parser for DeepStream.

Please, I need help.

There is no sample now. Maybe you can Google for third-party samples.

Hi nz97,

Please refer to this GitHub repository: GitHub - marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models

Don’t hesitate to ask further questions if you have any.

Do you know about any?

The problem with this repo is that its parser expects the output in a different format, i.e. separate output layers for boxes, scores, and classes; however, my model has a single output0 layer.
I am not a C++ developer, so I am facing a lot of issues getting that parser to work.

The postprocessing implementation is determined by the corresponding model, not by the DeepStream SDK. You need to develop the postprocessing yourself.
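
If you want to keep your single-output ONNX, a custom bbox parsing function for gst-nvinfer could look roughly like the sketch below. This is an illustrative draft, not an official sample: it assumes the stock Ultralytics layout described above (box decode and sigmoid already applied in-graph, boxes as cx, cy, w, h in network-input pixels), and the function name NvDsInferParseCustomYoloV8 and the file name are placeholders. NMS can be left to nvinfer's clustering.

    // nvdsparsebbox_yolov8.cpp -- illustrative sketch, not an official sample.
    // Assumes output0 is float32[1,84,6300]: rows 0-3 = cx,cy,w,h in network
    // input pixels, rows 4-83 = post-sigmoid class scores.
    #include <algorithm>
    #include <vector>
    #include "nvdsinfer_custom_impl.h"

    extern "C" bool NvDsInferParseCustomYoloV8(
        std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
        NvDsInferNetworkInfo const& networkInfo,
        NvDsInferParseDetectionParams const& detectionParams,
        std::vector<NvDsInferParseObjectInfo>& objectList)
    {
        if (outputLayersInfo.empty())
            return false;

        const NvDsInferLayerInfo& layer = outputLayersInfo[0]; // "output0"
        const float* data = static_cast<const float*>(layer.buffer);

        // gst-nvinfer reports dims without the batch dimension: [84, 6300].
        const int numAttrs   = layer.inferDims.d[0];
        const int numAnchors = layer.inferDims.d[1];

        for (int i = 0; i < numAnchors; ++i) {
            // Pick the best-scoring class for this prediction.
            int   bestClass = -1;
            float bestScore = 0.f;
            for (int c = 0; c < numAttrs - 4; ++c) {
                float score = data[(4 + c) * numAnchors + i];
                if (score > bestScore) { bestScore = score; bestClass = c; }
            }
            // Requires num-detected-classes=80 in the config so the
            // per-class threshold vector covers every class id.
            if (bestClass < 0 ||
                bestScore < detectionParams.perClassPreclusterThreshold[bestClass])
                continue;

            // Convert center-size to top-left-size, clamped to the input size.
            const float cx = data[0 * numAnchors + i];
            const float cy = data[1 * numAnchors + i];
            const float w  = data[2 * numAnchors + i];
            const float h  = data[3 * numAnchors + i];

            NvDsInferParseObjectInfo obj{};
            obj.classId = bestClass;
            obj.detectionConfidence = bestScore;
            obj.left   = std::max(0.f, cx - w / 2);
            obj.top    = std::max(0.f, cy - h / 2);
            obj.width  = std::min(w, networkInfo.width  - obj.left);
            obj.height = std::min(h, networkInfo.height - obj.top);
            objectList.push_back(obj);
        }
        return true;
    }

    // Compile-time check that the prototype matches the SDK's expectation.
    CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomYoloV8);

You would then point nvinfer at this function via parse-bbox-func-name and custom-lib-path in the config file.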

In export_yoloV8.py, an additional part is added to the network to parse the YOLO output into boxes, scores, and classes.

Basically, you need to train a YOLOv8 model or use a pretrained checkpoint, and run export_yoloV8.py to export the ONNX. Then you will be able to use it in DeepStream. DeepStream will compile it to a TensorRT engine on the first run, and you can reuse the engine file for future DeepStream runs.
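
As a rough illustration, the relevant part of the gst-nvinfer config for that workflow could look like the snippet below. File names and the generated engine name are placeholders, and the last two lines are only needed if you instead keep a single-output model with a custom parser:

    [property]
    onnx-file=yolov8m.onnx
    # DeepStream writes the generated engine next to the model on first run.
    model-engine-file=yolov8m.onnx_b1_gpu0_fp16.engine
    network-mode=2              # 0=FP32, 1=INT8, 2=FP16
    num-detected-classes=80
    cluster-mode=2              # 2 = NMS clustering
    # Only needed with a custom single-output parser:
    parse-bbox-func-name=NvDsInferParseCustomYoloV8
    custom-lib-path=./libnvds_yolov8_parser.so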

So you are saying that I can't work with this ONNX, and I will have to train and re-export to get it in the specific format?

I tried a YOLO with three output layers with the parser you mentioned, but the problems were:

  1. Accuracies were way too low; basically, no probability was above 0.1.
  2. Bounding boxes were not fitting/tracking the moving objects.

I’m actively using that approach in my projects and I saw no issues in terms of detections, so I recommend reviewing your steps and making sure everything is done properly.

If you have only the .onnx file for your network and not the weights file, you need to use an ONNX parser. I don’t have any experience with that, but I can give you an example of a similar approach.

Please refer to this example to parse the ONNX and make changes:
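
For orientation, here is a generic sketch of the standard TensorRT ONNX-import flow (this is not the linked example; it assumes the stock nvonnxparser API, with error handling and object cleanup omitted for brevity):

    // Sketch: importing an ONNX model with TensorRT's nvonnxparser.
    #include <iostream>
    #include "NvInfer.h"
    #include "NvOnnxParser.h"

    class Logger : public nvinfer1::ILogger {
        void log(Severity severity, const char* msg) noexcept override {
            if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
        }
    };

    int main()
    {
        Logger logger;
        auto builder = nvinfer1::createInferBuilder(logger);
        auto network = builder->createNetworkV2(
            1U << static_cast<uint32_t>(
                nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH));
        auto parser = nvonnxparser::createParser(*network, logger);

        if (!parser->parseFromFile("yolov8m.onnx",
                static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
            std::cerr << "Failed to parse ONNX file" << std::endl;
            return 1;
        }

        auto config = builder->createBuilderConfig();
        auto engine = builder->buildSerializedNetwork(*network, *config);
        // ... write the serialized engine to disk, then deploy with DeepStream.
        return engine ? 0 : 1;
    }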

Or you can implement a custom bbox parser for DeepStream.
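
If you take the custom-parser route from the sketch earlier in the thread, building it into a shared library is a single command on a default x86 DeepStream install (the paths are assumptions; adjust them for your setup):

    # Build the parser sketch into the .so referenced by custom-lib-path.
    g++ -shared -fPIC -o libnvds_yolov8_parser.so nvdsparsebbox_yolov8.cpp \
        -I/opt/nvidia/deepstream/deepstream/sources/includes \
        -I/usr/local/cuda/include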

I’ll look into it, thank you very much for your time.