DeepStream Sample App Meta Data (Solved)

Hello,

Finally have a question for you as we chug along on the DeepStream adventure.

That is, the sample app's primary GIE should generate a unique ID for each detected object. How can we find and export this metadata to pipe downstream for actionable metrics?

Also, how do we obtain the confidence score of the detected object?

Thank you in advance, as any direction/advice would be truly appreciated. Cheers.

Hi Jfcarp,
In this version, the confidence interface does not seem to be exposed.
However, you can set a detection threshold in the configuration; would that satisfy your requirement?
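
For reference, a minimal sketch of where that threshold typically lives in an nvinfer-style configuration file. The section and key names are illustrative and vary across DeepStream releases (some releases name the key pre-cluster-threshold), so check the sample configs that ship with your install:

  [class-attrs-all]
  # Illustrative example: detections scored below this value are
  # discarded by the plugin before any metadata is attached.
  threshold=0.2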

Thanks
wayne zhu

Good Morning Wayne,

Thank you for the response. Yes, I am aware of the threshold configuration item. We were simply hoping to get the confidence as well, as we tailor our models and configs over time. No worries, it is not super high-priority at the moment and more than likely can be figured out.

Regarding the other components mentioned:

How does one export or utilize the metadata DeepStream is said to present? Primarily the unique IDs, or any other data/thumbnails that might be exportable for use downstream?

Any direction appreciated, even if it's files within the code we have to adjust :)

Hi Jfcarp,
Do you need an interface to get the metadata for each frame?

I am not clear on your use case; could you give some detail about what you want to do?

Good Morning Sir,

I am referring to the DeepStream SDK description on the first website below. In particular, the excerpt: “Multiple output sinks including rendering to display, metadata logging to file, and saving to disk”. The rendering to display is fairly self-explanatory, but how would we log the metadata to a file for use downstream, or retrieve any of this data for use outside of rendering on the display?

There is also a mention of a C++ API for integrating into existing workflows in the second link below. Aside from the HTML documentation bundled with DeepStream, which mostly covers the sample app, is there any other documentation I missed that includes information about this API or anything else that might help here?

I would like to better understand what is available for use before reinventing the wheel and coding out something that is already implemented, just not found in the docs. Appreciate the help and thank you in advance.

https://developer.nvidia.com/deepstream-jetson
https://developer.nvidia.com/deepstream-sdk

Hi jfcarp,

Let me summarize what the DeepStream SDK provides:

  1. GStreamer plugins: decode, videoconvert, nvinfer (which defines the parameters for detection), and render.
  2. A GStreamer app: we provide an app that uses the above plugins to do car detection / license plate detection.
    With this solution you cannot get the data buffer directly, because everything happens inside the plugins.
    To retrieve the metadata, you can attach a probe function on a sink/src pad, as sketched below.
    If you can leverage the above plugins and app, that is the simplest route.
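
To make that concrete, here is a minimal sketch of such a probe. It uses the batch-metadata API as it appears in later DeepStream releases (gst_buffer_get_nvds_batch_meta and the NvDs* structs from gstnvdsmeta.h); the exact struct and function names differ between versions, and the confidence field is only populated in releases that expose it. It walks the frame and object metadata and appends one log line per detection:

  #include <stdio.h>
  #include <gst/gst.h>
  #include "gstnvdsmeta.h"  /* NvDsBatchMeta etc. (newer DeepStream releases) */

  /* Buffer probe, attached e.g. to the OSD element's sink pad: walks
   * batch -> frame -> object metadata and appends one line per detection. */
  static GstPadProbeReturn
  osd_sink_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
  {
    GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
    FILE *log = (FILE *) user_data;
    NvDsMetaList *l_frame, *l_obj;

    if (!batch_meta)
      return GST_PAD_PROBE_OK;

    for (l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
      NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
      for (l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
        NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
        /* object_id is filled in by the tracker; confidence is available
         * only in releases where the nvinfer plugin exposes it. */
        fprintf (log, "frame=%u class=%d id=%lu conf=%f\n",
            frame_meta->frame_num, obj_meta->class_id,
            (unsigned long) obj_meta->object_id, obj_meta->confidence);
      }
    }
    return GST_PAD_PROBE_OK;
  }

Attach it once while building the pipeline (osd and log_file below are placeholders for your OSD element and an open FILE handle):

  GstPad *pad = gst_element_get_static_pad (osd, "sink");
  gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
      osd_sink_pad_probe, log_file, NULL);
  gst_object_unref (pad);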

Otherwise:
You can use appsrc -> nvdec -> appsink to get the video buffer, then use TensorRT to run inference, and then do anything you want, since every buffer is controlled by yourself; a sketch follows below.
BTW, the nvinfer plugin uses TensorRT behind the scenes, so if some parameter is not exposed to you in the nvinfer plugin, or the metadata does not match your needs, you can choose this solution.
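
As a rough sketch of that second route, assuming an H.264 file source: filesrc, decodebin, and the appsink API are standard GStreamer; the NVIDIA hardware-decoder element name varies by platform and release, so decodebin is used here to pick one automatically, and sample.mp4 is a placeholder path:

  #include <gst/gst.h>
  #include <gst/app/gstappsink.h>

  /* Called for every decoded buffer: map it and hand the raw frame
   * to your own TensorRT inference code. */
  static GstFlowReturn
  on_new_sample (GstAppSink *sink, gpointer user_data)
  {
    GstSample *sample = gst_app_sink_pull_sample (sink);
    GstBuffer *buf;
    GstMapInfo map;

    if (!sample)
      return GST_FLOW_ERROR;

    buf = gst_sample_get_buffer (sample);
    if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
      /* map.data / map.size hold the decoded frame: run TensorRT here. */
      gst_buffer_unmap (buf, &map);
    }
    gst_sample_unref (sample);
    return GST_FLOW_OK;
  }

  int
  main (int argc, char *argv[])
  {
    GstElement *pipeline, *sink;
    GMainLoop *loop;

    gst_init (&argc, &argv);
    loop = g_main_loop_new (NULL, FALSE);

    pipeline = gst_parse_launch (
        "filesrc location=sample.mp4 ! decodebin ! appsink name=sink", NULL);

    sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");
    g_object_set (sink, "emit-signals", TRUE, NULL);
    g_signal_connect (sink, "new-sample", G_CALLBACK (on_new_sample), NULL);

    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    g_main_loop_run (loop);

    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);
    return 0;
  }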

Thanks
wayne zhu

Thank you sir, that will help a lot. Much appreciated.