Saving DeepStream prediction labels into a text file

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson TX2
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only) 4.5.1
• TensorRT Version 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e., which plugin or which sample application, and the function description.)

Hi, I was just wondering if there is a way to save the predictions made by the model into a text file after running the exported model through a DeepStream application. I would appreciate it if you could link me to discussion forums on the same topic (I tried searching but couldn't find any, I'm sorry) or provide some guidance on how to do it. It would be best if you could provide the examples in Python. Many thanks in advance :)

Can you elaborate on what you want? Do you have any prior knowledge of DeepStream?

Correct me if I'm wrong, but DeepStream is an application that uses a trained model to perform inference on a livestream/video. So I was wondering if there is a way to get the inference outcome (a text file containing all the predictions made) after running the DeepStream application.

DeepStream is an SDK, not an application. See Quickstart Guide — DeepStream 6.1.1 Release documentation.

Inference models are classified into five types: detector, classifier, segmentation, instance segmentation, and others. Different model types output different results with different data structures (it is software programming, after all).

DeepStream provides APIs to output inference results, but whether the result is presented as text, pictures, messages, or in some other form is decided by the application. The application developer can decide and implement this according to their own requirements. There is no limitation from the DeepStream SDK.

Oh, I see; maybe I wasn't clear on my end. Currently I'm doing object detection. I have already built and exported a custom trained model (YOLOv4, SSD, etc.) into DeepStream. Now I'm wondering if I can obtain the prediction output (bbox coordinates) after running those exported models in DeepStream. I'm currently using DeepStream 5.1 and all of my code is in Python.

Bounding boxes and other output types are exposed through DeepStream metadata; see MetaData in the DeepStream SDK — DeepStream 6.1.1 Release documentation.

You can get the object metadata; rect_params in NvDsObjectMeta (NVIDIA DeepStream SDK API Reference: _NvDsObjectMeta Struct Reference) contains the bbox.

There is lots of sample code in /opt/nvidia/deepstream/deepstream/sources/apps/ showing how to get metadata in an application. Please study the documentation and sample code carefully.

BTW, it is important to learn GStreamer (https://gstreamer.freedesktop.org/) before you start with DeepStream.
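To make this concrete, here is a minimal sketch of a pad probe that walks batch meta → frame meta → object meta and reads rect_params, following the pattern used in the deepstream_python_apps samples (the function name and the print format are my own choices, not anything prescribed by the SDK):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # The stream muxer attaches NvDsBatchMeta to every Gst buffer.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            rect = obj_meta.rect_params  # bbox in pixels: left, top, width, height
            print(frame_meta.frame_num, obj_meta.class_id,
                  rect.left, rect.top, rect.width, rect.height)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

You would attach it downstream of nvinfer, for example on the OSD element's sink pad, with osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0), as the Python samples do.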

Sorry for the trouble, but are there any Python examples that can store the bounding box metadata into a file?

There are only samples showing how to access the metadata. You can write the data to a file yourself.

deepstream_python_apps/deepstream_test_3.py at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com)
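For the file-writing part, something like this would do it (a sketch only; the helper name, file name, and line format are my own choices, not part of the SDK). It could be called once per frame from inside a probe like the one in deepstream_test_3.py:

```python
import pyds

def write_objects_to_file(frame_meta, path="predictions.txt"):
    # Append one line per detected object: frame number, label, bbox, confidence.
    # (The path and the line format are arbitrary choices for this sketch.)
    with open(path, "a") as f:
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            r = obj_meta.rect_params
            f.write(f"{frame_meta.frame_num} {obj_meta.obj_label} "
                    f"{r.left} {r.top} {r.width} {r.height} "
                    f"{obj_meta.confidence}\n")
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
```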


OK, thank you so much for your help!

Hi, I have read through the MetaData document that you sent me. Correct me if I'm wrong: to get the bbox coordinates, I first have to get the batch meta using pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer)), which contains the frame meta list. Then I have to get the frame meta from that list using pyds.NvDsFrameMeta.cast, which contains the object list. Subsequently, I have to obtain the object meta from the object list using pyds.NvDsObjectMeta.cast. However, I'm still not quite sure how to get the bbox coordinates from the object meta, as I can't find any example of this in the deepstream_test_3.py that you sent me.

Sorry, I have read the document further and realized that there is a way to get the coordinates and the confidence level. Just to double-confirm:
• obj_meta.rect_params.top represents ymin
• obj_meta.rect_params.left represents xmin
• obj_meta.rect_params.width represents the width of the bbox
• obj_meta.rect_params.height represents the height of the bbox
• obj_meta.confidence represents the confidence level
Am I right?

Yes. See NVIDIA DeepStream SDK API Reference: _NvOSD_RectParams Struct Reference.
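For example, turning those fields into corner coordinates plus a score is just (a small sketch; the variable names are only for illustration):

```python
# rect_params gives top-left corner plus width/height, all in pixels.
r = obj_meta.rect_params
xmin = r.left
ymin = r.top
xmax = r.left + r.width
ymax = r.top + r.height
score = obj_meta.confidence
```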

OK, I will give it a try. Thank you so much for your guidance!
