Now I want to parse the output of a binary classifier and attach the results as metadata in the DeepStream format, so that DeepStream can handle them, run the analytics, or send them to a database.
I have the confidence, class id, and string label from the classifier.
So my question is, where do I store this information?
NvDsObjectMeta
NvDsClassifierMeta
…?
Could you share a code snippet for this?
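In case it helps to see what I mean, here is a minimal sketch in the Python bindings (pyds) of how I imagine attaching the classifier result to an object meta. The function name and the unique_component_id value are my own choices, and I'm not sure assigning result_label directly is the right way:

```python
import pyds

def attach_classifier_output(batch_meta, obj_meta, class_id, label, confidence):
    """Hang a classifier result (class id, label, confidence) off an object meta."""
    classifier_meta = pyds.nvds_acquire_classifier_meta_from_pool(batch_meta)
    classifier_meta.unique_component_id = 1  # id of my classifier component, my choice

    label_info = pyds.nvds_acquire_label_info_meta_from_pool(batch_meta)
    label_info.result_class_id = class_id
    label_info.result_prob = confidence
    label_info.result_label = label  # fixed-size char array in C; is this the right way?

    pyds.nvds_add_label_info_meta_to_classifier(classifier_meta, label_info)
    pyds.nvds_add_classifier_meta_to_object(obj_meta, classifier_meta)
```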
And, for the future, once the data is parsed into the correct format, where do I store:
Keypoints from pose estimation.
Segmentation mask.
I want DeepStream to handle the drawing automatically, so I can test how much of this I can automate. For the keypoints, the closest I can picture is drawing them myself; see the sketch below.
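A rough sketch of the keypoints case in pyds, drawing one circle per keypoint through NvDsDisplayMeta. The keypoint list format is my own, and this only covers drawing, not carrying the data as analysable metadata, which is exactly my question. (For the segmentation mask, I've seen NvDsInferSegmentationMeta attached as user meta, but I'm not sure that's the intended path either.)

```python
import pyds

def draw_keypoints(batch_meta, frame_meta, keypoints):
    """Draw pose keypoints as circles so that nvosd renders them.
    keypoints: list of (x, y) pixel coordinates, my own format."""
    display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
    # NvDsDisplayMeta holds at most 16 elements of each type;
    # acquire additional display metas for more keypoints.
    display_meta.num_circles = min(len(keypoints), 16)
    for i in range(display_meta.num_circles):
        x, y = keypoints[i]
        circle = display_meta.circle_params[i]
        circle.xc, circle.yc = int(x), int(y)
        circle.radius = 4
        circle.circle_color.set(1.0, 0.0, 0.0, 1.0)  # opaque red
    pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
```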
Thanks!!!
With the link, I was able to find other structures, but it still isn’t quite clear to me.
Suppose I want DeepStream to only classify images; let’s forget about object detection, just a classifier. It should display the name of the class on screen and, if possible, send it as metadata over a database connection.
What is the “DeepStream way” to do that?
I don’t understand where in the hierarchy I should store this information for a classifier output:
ClassID
Label string
Confidence
There is nvds_add_obj_meta_to_frame, which adds object detection data in the correct way, and DeepStream then automatically draws the objects on screen. I have tested it from the examples, and it works great.
How do you do that with classifier data? Can I add an NvDsClassifierMeta to the batch_meta? Does that make sense?
I have found nvds_add_obj_meta_to_frame, nvds_add_classifier_meta_to_object, and nvds_add_classifier_meta_to_roi, but I don’t understand in which cases I should use each of them, or how to combine them.
After writing this, the only way I can see is to create a dummy object meta covering the entire image and attach my classifier output to it. It’s kind of a brute-force method, but it may work.
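Something like this, again in pyds. The helper combines nvds_add_obj_meta_to_frame with nvds_add_classifier_meta_to_object; the frame size is hard-coded just for illustration, and attach_classifier_output is the hypothetical helper from my first post:

```python
import pyds

UNTRACKED_OBJECT_ID = 0xFFFFFFFFFFFFFFFF  # from the DeepStream headers

def add_fullframe_classification(batch_meta, frame_meta, class_id, label,
                                 confidence, frame_w=1920, frame_h=1080):
    """Brute-force idea: one object meta spanning the whole frame,
    with the classifier result hung off it."""
    obj_meta = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)
    obj_meta.class_id = class_id
    obj_meta.object_id = UNTRACKED_OBJECT_ID
    obj_meta.confidence = confidence
    obj_meta.obj_label = label

    # A "bounding box" covering the entire image.
    rect = obj_meta.rect_params
    rect.left, rect.top = 0, 0
    rect.width, rect.height = frame_w, frame_h
    rect.border_width = 0  # don't actually draw the dummy box

    # Let nvosd display the class name.
    txt = obj_meta.text_params
    txt.display_text = f"{label} ({confidence:.2f})"
    txt.x_offset, txt.y_offset = 10, 10
    txt.font_params.font_name = "Serif"
    txt.font_params.font_size = 12
    txt.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)
    txt.set_bg_clr = 1
    txt.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)

    pyds.nvds_add_obj_meta_to_frame(frame_meta, obj_meta, None)
    attach_classifier_output(batch_meta, obj_meta, class_id, label, confidence)
```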
I will wait for your answer in case you have a better solution, or know the proper way to do this.
@kesong
Hello again!!
To give a better picture, I will compare two example projects from NVIDIA.
First, the SSD parser demo, deepstream-ssd-parser.
In this case, the post-processing algorithm grabs the tensors, extracts the boxes, scores, classes, etc., and stores them in NvDsObjectMeta. When you do that, the OSD overlay is drawn automatically by DeepStream; I think you can even enable the object tracking plugin. And, from my understanding, if you add a msgbroker plugin at the end, it will consume this information correctly from the batch_meta hierarchy.
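As I read it, the demo's probe does roughly this (my simplification with my own function name; it assumes raw tensor output is enabled on the inference element):

```python
import pyds

def find_output_tensors(frame_meta):
    """SSD-parser style: walk frame user meta to find the raw inference tensors."""
    l_user = frame_meta.frame_user_meta_list
    while l_user is not None:
        user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
            tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
            layers = [pyds.get_nvds_LayerInfo(tensor_meta, i)
                      for i in range(tensor_meta.num_output_layers)]
            # Parse boxes/scores/classes from these layers, then fill NvDsObjectMeta.
            return layers
        try:
            l_user = l_user.next
        except StopIteration:
            break
    return []
```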
Now, the pose estimation demo, deepstream_pose_estimation.
In this case, they post-process the output, run some algorithms, and at the end produce a list of lines for the pose keypoints, which they add to the display metadata. That way, the OSD plugin draws the info on screen. That’s OK. But what happens if I add a msgbroker plugin? Will it consume the display metadata? My understanding is no.
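What they do amounts to something like this (my pyds simplification; the actual demo is C++, and the line-list format is my own):

```python
import pyds

def draw_skeleton_lines(batch_meta, frame_meta, lines):
    """Pose-demo style: push lines into NvDsDisplayMeta so nvosd draws them.
    lines: list of (x1, y1, x2, y2) tuples, my own format.
    Display meta is render-only; as far as I can tell it never
    reaches the msgbroker."""
    display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
    display_meta.num_lines = min(len(lines), 16)  # 16 elements max per display meta
    for i in range(display_meta.num_lines):
        x1, y1, x2, y2 = lines[i]
        line = display_meta.line_params[i]
        line.x1, line.y1, line.x2, line.y2 = x1, y1, x2, y2
        line.line_width = 2
        line.line_color.set(0.0, 1.0, 0.0, 1.0)  # green
    pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
```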
So, what is NVIDIA's recommendation for approaching these cases?
What is the best way, after custom processing of the tensors, to add the info to the pool of metadata and have it available for display, the message broker, etc.? Because that is what DeepStream is about: analytics.
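For the broker side, the only mechanism I've found so far is the one in deepstream-test4: wrap an NvDsEventMsgMeta in user meta so that nvmsgconv and nvmsgbroker pick it up. A minimal sketch follows; the exact allocation and copy-callback API differs between pyds versions, so take this as the shape of the idea rather than exact code:

```python
import pyds

def attach_event_msg(batch_meta, frame_meta, obj_meta):
    """deepstream-test4 pattern: attach an NvDsEventMsgMeta as user meta so
    nvmsgconv + nvmsgbroker serialize it and send it downstream."""
    user_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)
    # Older bindings: alloc_nvds_event_msg_meta(); newer ones take user_meta.
    msg_meta = pyds.alloc_nvds_event_msg_meta()
    msg_meta.type = pyds.NvDsEventType.NVDS_EVENT_ENTRY
    msg_meta.objType = pyds.NvDsObjectType.NVDS_OBJECT_TYPE_CUSTOM
    msg_meta.objClassId = obj_meta.class_id
    msg_meta.confidence = obj_meta.confidence
    msg_meta.frameId = frame_meta.frame_num
    msg_meta.sensorId = 0

    user_meta.user_meta_data = msg_meta
    user_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_EVENT_MSG_META
    # deepstream-test4 also registers copy/free callbacks here via
    # pyds.user_copyfunc / pyds.user_releasefunc.
    pyds.nvds_add_user_meta_to_frame(frame_meta, user_meta)
```

But that still leaves my main question open: is this, plus the dummy full-frame object, really the recommended path for classifier-only and pose outputs?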