Consultation on YOLOv8 model conversion issues and config_infer_primary_yoloV8.txt parameters

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.3
• TensorRT Version: 8.4
Question 1: I used the uploaded export_yolov8.py to convert the .pt model to ONNX, which produced an ONNX model with only one output head. I then used the uploaded config_infer_primary_yoloV8.txt and DeepStream configuration file to build the engine. The engine built with network-type=0 crashes at runtime, while the engine built with network-type=2 runs normally. Why is this?
Question 2: Where can I find the meaning of each parameter in config_infer_primary_yoloV8.txt? I couldn’t find any description of the network-type parameter in the documentation of the DeepStream-Yolo folder.
upload.tar.gz (2.3 KB)

The postprocessing algorithms for classifiers and detectors are different. Please check your postprocessing implementation.

Please refer to Gst-nvinfer — DeepStream documentation and DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

I can confirm it is a detection model, but with the latest DeepStream-Yolo export script (export_yolov8.py) the ONNX model only has one output head. Should I use an export script that is not the latest version?

The postprocessing I mentioned in this topic is the step that parses the model’s output and calculates the bboxes from it. The postprocessing must be aligned with the model you use.

Hi,
What is your final goal? To export a YOLOv8 ONNX model and run it with yolo_deepstream?

Thanks

Okay, I understand what you mean. How should I configure post-processing in the config_infer_primary_yoloV8.txt file if the model has only one output head?

I use the DeepStream-Yolo project to generate the ONNX model, and then run

deepstream-app -c config_primary_yolov8.txt

to generate the engine that can run detection on the Jetson device. I used the uploaded files above to generate the ONNX and engine, but when I run it I encounter a segmentation fault.

  1. You must generate a correct ONNX model.
    Here is how I export the ONNX with Python:

from ultralytics import YOLO

# Load a trained checkpoint; a .yaml here would build an untrained network
model = YOLO("yolov8s.pt")
# Export to ONNX; returns the path to the exported model
path = model.export(format="onnx", imgsz=(640, 640))

  2. YOLOv8’s output format is different from YOLOv7’s, so you must rewrite the CUDA function “NvDsInferParseYoloCuda” to adapt it to YOLOv8’s postprocessing. Without that you cannot get correct results with this repo; see the CPU-side sketch after this list.
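As an illustration only (this is not the DeepStream-Yolo or yolo_deepstream implementation, which run on the GPU), here is a minimal CPU-side sketch of such a parser. It assumes the raw ultralytics export with a single output of shape [1, 84, 8400] (4 box values plus 80 class scores per anchor); the function name NvDsInferParseCustomYoloV8 and the class/anchor counts are assumptions for illustration:

#include <algorithm>
#include <vector>

#include "nvdsinfer_custom_impl.h"

// Hypothetical parser for a single-output YOLOv8 ONNX model.
extern "C" bool NvDsInferParseCustomYoloV8(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferParseObjectInfo> &objectList)
{
    if (outputLayersInfo.empty())
        return false;

    // Assumed output layout: [1, 84, 8400] -> 4 box values (cx, cy, w, h)
    // followed by 80 class scores, stored row-major across 8400 anchors.
    const int kNumClasses = 80;   // assumption: COCO class count
    const int kNumAnchors = 8400; // assumption: 640x640 input
    const float *data =
        static_cast<const float *>(outputLayersInfo[0].buffer);

    for (int a = 0; a < kNumAnchors; ++a) {
        // Pick the best-scoring class for this anchor.
        int bestClass = -1;
        float bestScore = 0.0f;
        for (int c = 0; c < kNumClasses; ++c) {
            float score = data[(4 + c) * kNumAnchors + a];
            if (score > bestScore) {
                bestScore = score;
                bestClass = c;
            }
        }
        if (bestClass < 0 ||
            bestScore < detectionParams.perClassPreclusterThreshold[bestClass])
            continue;

        // Boxes are encoded as center-x, center-y, width, height in
        // network-input pixels; convert to top-left/width/height and clamp.
        float cx = data[0 * kNumAnchors + a];
        float cy = data[1 * kNumAnchors + a];
        float w  = data[2 * kNumAnchors + a];
        float h  = data[3 * kNumAnchors + a];

        NvDsInferParseObjectInfo obj;
        obj.classId = bestClass;
        obj.detectionConfidence = bestScore;
        obj.left   = std::max(cx - w / 2.0f, 0.0f);
        obj.top    = std::max(cy - h / 2.0f, 0.0f);
        obj.width  = std::min(w, networkInfo.width - obj.left);
        obj.height = std::min(h, networkInfo.height - obj.top);
        objectList.push_back(obj);
    }
    return true;
}

// Lets gst-nvinfer validate the function prototype when loading the library.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomYoloV8);

The sketch only shows the decoding logic that must match the model’s output layout; clustering/NMS is handled separately by nvinfer according to the configuration file.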

You’ve already configured your postprocessing function “NvDsInferParseYolo” with “network-type=0” in the configuration file. This postprocessing function caused the crash, so you need to debug “NvDsInferParseYolo”; it may not match your model.
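For reference, a minimal sketch of how these keys tie together in config_infer_primary_yoloV8.txt; the library path is a placeholder for your own build:

[property]
...
# 0 = detector: triggers the bbox-parsing postprocessing path
network-type=0
# name of the custom bbox parser exported by the library
parse-bbox-func-name=NvDsInferParseYolo
# placeholder: point this at your compiled custom parser library
custom-lib-path=/path/to/libnvdsinfer_custom_impl_Yolo.so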

OK, thank you so much for your reply! Let me check whether I understand this correctly: the network-type values 0, 1, and 2 do not necessarily correspond exactly to the model type. Previously I read that network-type=0 corresponds to detection and network-type=2 represents classification. In fact, a detection model can also be set to network-type=2, right?

"network-type‘ only impact which postprocessing will be trigerred to handle the model’s outputs.

No, it can’t. If you trigger classifier postprocessing with your detection model, you will not get correct bboxes.

gst-nvinfer is open source; please refer to the source code. There is a diagram of the gst-nvinfer workflow to help you understand the source code: DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks
