Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): NVIDIA GeForce RTX 3090
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8.6.1.6
• NVIDIA GPU Driver Version (valid for GPU only): 535.171.04
• Issue Type (questions, new requirements, bugs): Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
I have already used the DeepStream SDK with the YOLOv8 model and it works fine. Last week I learned about the new YOLOv8-OBB model, which, in addition to the bounding box, also provides its orientation. That is ideal for my project, so I tried to use it with DeepStream; however, I am not able to obtain the orientation parameter. I followed the same method I used to integrate the YOLOv8 version (DeepStream-YOLO).
Is it possible to implement this new model in DeepStream?
You need to implement the post-processing according to your own model. You can try modifying the nvdsinfer_custom_impl_Yolo code to meet your needs.
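To make the shape of such a change concrete, here is a minimal, self-contained sketch of how an OBB decode step could look. This is not the official parser: it assumes a hypothetical per-detection output layout of [cx, cy, w, h, class scores…, angle] (verify the actual layout against your exported ONNX), and it uses a stand-in struct mirroring NvDsInferObjectDetectionInfo so it compiles outside DeepStream.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Stand-in mirroring NvDsInferObjectDetectionInfo from nvdsinfer.h, so this
// sketch compiles outside DeepStream. In the real plugin, include
// "nvdsinfer_custom_impl.h" instead. Note there is no angle field; see the
// discussion below about why the struct cannot simply be extended.
struct ObjectInfo {
    unsigned int classId;
    float left, top, width, height;
    float detectionConfidence;
};

// Hypothetical per-detection layout of a YOLOv8-OBB head after transpose:
// [cx, cy, w, h, score_0 .. score_{C-1}, angle]. The real order may differ;
// check your exported model.
static std::vector<ObjectInfo> decodeObb(const float* out,
                                         std::size_t numDets,
                                         std::size_t numClasses,
                                         float scoreThreshold)
{
    std::vector<ObjectInfo> objects;
    const std::size_t stride = 4 + numClasses + 1;
    for (std::size_t i = 0; i < numDets; ++i) {
        const float* d = out + i * stride;

        // Pick the best class by score.
        std::size_t best = 0;
        for (std::size_t c = 1; c < numClasses; ++c)
            if (d[4 + c] > d[4 + best]) best = c;
        const float score = d[4 + best];
        if (score < scoreThreshold) continue;

        const float cx = d[0], cy = d[1], w = d[2], h = d[3];
        const float angle = d[4 + numClasses]; // assumed to be in radians

        // The SDK struct only holds an axis-aligned box, so store the
        // rotated box's axis-aligned envelope here and carry the angle
        // through a side channel (e.g. user metadata) if you need it later.
        const float cosA = std::fabs(std::cos(angle));
        const float sinA = std::fabs(std::sin(angle));
        const float bw = w * cosA + h * sinA;
        const float bh = w * sinA + h * cosA;

        ObjectInfo obj;
        obj.classId = static_cast<unsigned int>(best);
        obj.left = cx - bw / 2.0f;
        obj.top = cy - bh / 2.0f;
        obj.width = bw;
        obj.height = bh;
        obj.detectionConfidence = score;
        objects.push_back(obj);
    }
    return objects;
}
```

The axis-aligned envelope keeps nvinfer's downstream clustering working unchanged; the angle itself has to travel separately.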
Thanks! I suppose I have to change the attached script, nvdsparsebbox_Yolo.cpp (nvdsparsebbox_Yolo.cpp.txt (7.0 KB)).
In the config file, parse-bbox-func-name refers to the function in this script, NvDsInferParseYolo. However, I see that this function uses the NvDsInferParseObjectInfo structure. Can I modify this structure and add another parameter like b.rotation, or is it fixed?
/**
 * Holds information about one parsed object from a detector's output.
 */
typedef struct
{
    /** Holds the ID of the class to which the object belongs. */
    unsigned int classId;
    /** Holds the horizontal offset of the bounding box shape for the object. */
    float left;
    /** Holds the vertical offset of the object's bounding box. */
    float top;
    /** Holds the width of the object's bounding box. */
    float width;
    /** Holds the height of the object's bounding box. */
    float height;
    /** Holds the object detection confidence level; must be in the range
        [0.0,1.0]. */
    float detectionConfidence;
} NvDsInferObjectDetectionInfo;
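Since this struct is defined in the SDK header (nvdsinfer.h) and consumed by nvinfer's built-in clustering, adding a field to it would require rebuilding the plugin stack, so in practice it is treated as fixed. If you write a custom parser for the OBB head instead, it is wired up in the nvinfer config the same way as the existing one. A hypothetical fragment (the function and library names below are placeholders; use whatever your parser actually exports):

```ini
[property]
# Hypothetical names; substitute the function exported by your modified
# nvdsinfer_custom_impl_Yolo build.
parse-bbox-func-name=NvDsInferParseYoloObb
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```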
I see, thank you! About the model, I would be very grateful if you could help me integrate it, as I don’t have much knowledge in these matters … in what format would you like me to share it?
It’s mostly up to you to integrate it according to your own needs. Any problems encountered during the integration can be discussed in this topic.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.