Consultation on YOLOv8 model conversion issues and config.infer_primary_YOLOV8.txt parameters

Question 1: I used the uploaded export_yolov8.py to convert the PT model to ONNX, which produced an ONNX model with only one output head. I then used the uploaded config.infer_primary_YOLOV8.txt and DeepStream configuration file to build the engine. The engine built with network-type=0 in config.infer_primary_YOLOV8.txt crashes at runtime, while the engine built with network-type=2 runs normally. Why is this?
Question 2: Where can I find the meaning of each parameter in config.infer_primary_YOLOV8.txt? I could not find any description of the network-type parameter in the documentation of the DeepStream-Yolo folder.
upload.tar.gz (2.3 KB)
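For context on the question above: `network-type` is a standard property of the Gst-nvinfer configuration file, documented in the NVIDIA DeepStream plugin manual (0 = Detector, 1 = Classifier, 2 = Segmentation, 3 = Instance Segmentation). The sketch below is a minimal, hypothetical detector-style `[property]` section using only documented nvinfer keys; the file paths and class count are placeholders, not taken from the uploaded archive:

```ini
[property]
gpu-id=0
# Normalize pixel values to 0..1, as YOLO models typically expect
net-scale-factor=0.0039215697906911373
# Placeholder paths - replace with your actual model files
onnx-file=yolov8s.onnx
model-engine-file=yolov8s.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
# network-type: 0=Detector, 1=Classifier, 2=Segmentation, 3=Instance Segmentation
network-type=0
num-detected-classes=80
gie-unique-id=1
# A single-output-head YOLO ONNX model generally needs a custom
# bounding-box parser; without one, the default detector
# post-processing may fail on the unexpected output layout.
# parse-bbox-func-name=...
# custom-lib-path=...
```

Note that with network-type=0 the nvinfer plugin runs detector post-processing on the output tensor, so the output layout (or a custom parser matching it) has to line up with what the plugin expects; that mismatch is a common cause of runtime crashes, though the exact cause here would depend on the files in the upload.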

Hi @ncepuwuhan,

I think you are not looking for help with NVIDIA-related Android development, am I correct?

Is this about DeepStream? Then you should check out the dedicated category for that.

If it is connected to specific Jetson hardware, please check the Jetson sub-categories for your particular device.

Thanks!


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.