YOLOv8-OBB model in deepstream

• Hardware Platform NVIDIA Jetson Xavier NX
• DeepStream Version 6.1.1
• JetPack Version 5.0.2
• TensorRT Version 8.4.1.5

I’m currently working with the YOLOv8-OBB (oriented bounding boxes) model in the DeepStream SDK and facing an issue: the bounding boxes being plotted are not oriented. Could someone please advise on the changes I need to make, and specify the file(s) where these changes should be implemented?

I have used deepstream_python_apps and DeepStream-Yolo.

Your assistance would be greatly appreciated!

Thank you.

Could you attach an image to illustrate this issue and your needs?

Hi!
I am using the YOLOv8-OBB model to run detection. This model gives bounding boxes oriented according to the detected object: if my object is placed at a 45-degree angle, the box is drawn at that angle. When we parse the results, there is a dedicated parameter for this, for example rotation. The result is shown here obb in the OBB section.


yolov8 result(reference)

yolov8-obb result(reference)

So the output I am getting is like the first image, but I want the boxes with their orientation, as in the second image.
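The difference between the two images comes down to geometry: an axis-aligned box is fully described by its corner and size, while an oriented box also carries a rotation angle, so its four corners must be computed by rotating the half-extents around the center. A minimal sketch of that conversion (assuming the angle is in radians, counter-clockwise, which is how Ultralytics reports OBB rotation):

```python
import math

def obb_corners(cx, cy, w, h, angle_rad):
    """Convert a center/size/angle oriented box into its 4 corner points.

    (cx, cy) is the box center, (w, h) the width/height, and angle_rad
    the rotation in radians (counter-clockwise).
    """
    cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
    # Corner offsets from the center before rotation
    offsets = [(-w / 2, -h / 2), (w / 2, -h / 2),
               (w / 2, h / 2), (-w / 2, h / 2)]
    # Rotate each offset, then translate to the center
    return [(cx + dx * cos_a - dy * sin_a,
             cy + dx * sin_a + dy * cos_a)
            for dx, dy in offsets]
```

With angle 0 this degenerates to the usual axis-aligned corners, which is a quick sanity check for the sign conventions of your model's angle output.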

Our OSD cannot draw oriented bounding boxes directly at the moment. What outputs can you currently get through DeepStream?

The output I am getting is the same as that shown in the first image.

No. I mean what outputs you can get from the model inference, like the coordinate data, the orientation data, etc. You can use those outputs to implement your needs by drawing lines.

image
This is the output of the model.
But can you help me with the result parser in DeepStream? I can use these coordinates to plot the oriented boxes, but I am confused about where I will have to make the changes.

Because there might be a little too much code to change, I suggest you run the DeepStream pipeline with your YOLOv8-OBB model first, and then we can discuss the details that need to be modified. You can refer to our similar demo deepstream_yolo.

Yes, I have already run the application. I ran the test1 sample application with the YOLOv8-OBB model.
So, as I previously mentioned, I have used the deepstream_python_apps GitHub code and this YOLO config file: DeepStream-Yolo.

OK. Then you need to implement your own postprocessing function based on the output of your model, like nvdsinfer_custom_impl_Yolo.
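In DeepStream-Yolo the actual parser is written in C++ (nvdsparsebbox_Yolo.cpp), but the decoding logic itself is simple and can be sketched in Python. The sketch below assumes each output row is laid out as [cx, cy, w, h, class scores..., angle]; the real layout depends on how the model was exported, so verify it against your engine's output dimensions before porting this into the custom parser:

```python
def parse_obb_detections(rows, conf_threshold=0.25):
    """Decode raw YOLOv8-OBB output rows into oriented-box dicts.

    Assumed row layout (verify against your exported model):
    [cx, cy, w, h, score_cls0, ..., score_clsN, angle_rad]
    """
    detections = []
    for row in rows:
        cx, cy, w, h = row[:4]
        scores = row[4:-1]
        angle = row[-1]
        # Keep only the best-scoring class, and drop weak detections
        cls_id = max(range(len(scores)), key=lambda i: scores[i])
        if scores[cls_id] < conf_threshold:
            continue
        detections.append({
            "class_id": cls_id,
            "confidence": scores[cls_id],
            "cx": cx, "cy": cy, "w": w, "h": h,
            "angle": angle,
        })
    return detections
```

A real parser would also apply NMS (rotated NMS, ideally) after this step; that part is omitted here to keep the sketch focused on decoding.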

After parsing the relevant output parameters, you need to draw 4 lines according to those parameters, because we cannot draw oriented bounding boxes directly.

So in which file in particular do I need to make changes to draw the lines?
For parsing I will change the /nvdsinfer_custom_impl_Yolo/nvdsparsebbox_Yolo.cpp file, but where will the changes for drawing the boxes go?

You can add a probe function to the src pad of nvinfer. Then add the lines to the display_meta, like display_meta.
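The probe approach above can be sketched as follows with the pyds bindings. This is only a sketch: it assumes the rotation angle has been made available per object by your custom parser (here through a hypothetical `get_angle(obj_meta)` helper, since NvDsObjectMeta has no angle field), and it runs only inside a DeepStream pipeline on the target device:

```python
import math

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def get_angle(obj_meta):
    # Hypothetical helper: look up the rotation your custom parser
    # attached to this object (e.g. via user meta). Returns radians.
    return 0.0

def draw_obb_probe(pad, info, u_data):
    """Pad probe (attach to the src pad of nvinfer) that draws each
    detection as 4 OSD lines forming an oriented box."""
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_lines = 0
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj = pyds.NvDsObjectMeta.cast(l_obj.data)
            r = obj.rect_params
            cx, cy = r.left + r.width / 2, r.top + r.height / 2
            angle = get_angle(obj)
            cos_a, sin_a = math.cos(angle), math.sin(angle)
            # Rotate the 4 corner offsets around the box center
            corners = [(cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a)
                       for dx, dy in [(-r.width / 2, -r.height / 2),
                                      (r.width / 2, -r.height / 2),
                                      (r.width / 2, r.height / 2),
                                      (-r.width / 2, r.height / 2)]]
            # One NvDsDisplayMeta holds at most 16 lines; acquire another
            # display_meta from the pool if you exceed that.
            for i in range(4):
                line = display_meta.line_params[display_meta.num_lines]
                line.x1, line.y1 = int(corners[i][0]), int(corners[i][1])
                line.x2, line.y2 = (int(corners[(i + 1) % 4][0]),
                                    int(corners[(i + 1) % 4][1]))
                line.line_width = 2
                line.line_color.set(0.0, 1.0, 0.0, 1.0)  # green, opaque
                display_meta.num_lines += 1
            l_obj = l_obj.next
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

You would attach it the same way the deepstream_python_apps samples attach their OSD probes, e.g. `pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, draw_obb_probe, 0)`. You may also want to hide the default axis-aligned rectangles (e.g. set border_width to 0 in rect_params) so only the oriented boxes are visible.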