How to get the full list of obj_meta data available

I am modifying the example in the python examples of the deepstream 5.0 SDK.

I want to understand all the information available for external use like object_id (tracker), class_id (pgie), etc.

I know the data is pulled out into obj_meta from osd_sink_pad_buffer_probe and have successfully also extracted bbox data using obj_meta.rect_params.left/top/width/height.

But I have not found any documentation or ability to extract sgie1,2,3 data or to know all data available in pgie or tracker that I can pull out.

I want to be able to use this data to send to a DB and also for further external analytics.

I’m using deepstream 5.0 SDK on a Jetson Nano (the new one with 2 CSI ports).

Probing on the OSD sink pad will get you the PGIE, SGIE and tracker information.
From this graph you can get a full picture of how the metadata structs relate to each other.
For sending the data to a DB, you may use the msgbroker; here is the documentation, which you can read through.

Thanks @amycao for your explanation.

For linking to the database, I have already done the linking and implementation in Python, though your links are very interesting.

My main problem remains how to get a list of the data available in obj_meta. I’m sure it is a matter of understanding the data structure (and getting the right terminology). Once Python has performed the cast,

is obj_meta a dict, a list, an array, or something else?

I want to get a list of the names of the variables extracted from the pipeline metadata so that I can decide what data is useful. This list should look something like;


I have read your links before, but I find them confusing and not fully clear. I understand that the metadata compiled while running through the pipeline holds the data in NvDsObjectMeta and its child NvDsClassifierMeta, and is cast out to pyds so that the C code remains the owner of the memory. But I get stuck here.

Is there a simple Python loop, or a command in the API, to print a list of the names of the variables to the CLI?

Thanks for your support.

I’m not sure how complete this documentation is, but I think it’s the best we’ve got on the python side.

Thank you. These are very good links, and I did find them a few days ago. Between you and @amycao you have been great in showing me that I am at least looking at the right documentation and that there is nothing better at the moment.

Can you or someone knowledgeable confirm if my conclusions are correct;

  1. All available properties are listed under pyds.NvDsObjectMeta, with rect_params under pyds.NvOSD_RectParams and text_params under pyds.NvOSD_TextParams.

  2. The osd_sink_pad_buffer_probe function sees the output of each PGIE and SGIE in the pipeline (as they are all added to the metadata). This means it will cycle through each one and each answer; e.g. car color, car type, with the detected object’s label in the property “obj_label”.

  3. You can use the “while l_obj is not None” loop to find all the PGIE and SGIE information. For example, in the deepstream-test2 Python sample this will give the PGIE information on the object (e.g. car), then the SGIE1 information on car color, SGIE2 on car make, and finally SGIE3 on car type.

  4. The tracker information is in the NvDsObjectMeta property object_id.

If this is a good and accurate summation, then I’ll write a simple Python script in this topic and mark it as the solution so others can easily use it in their projects.

It has been cast to pyds.NvDsObjectMeta.

Is there a simple Python loop, or a command in the API, to print a list of the names of the variables to the CLI?

–> You could refer to the test3 Python sample, function tiler_src_pad_buffer_probe,
for how to retrieve the metadata properties, and do some customization accordingly, referring to the data structures above.
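On the question of printing the available names: since the cast returns a normal Python object, the built-in dir() works on it. A small sketch of that idea; FakeObjMeta is a stand-in for what pyds.NvDsObjectMeta.cast() returns, and the helper name list_fields is mine, not part of the API:

```python
def list_fields(obj):
    """Public, non-callable attribute names of an object, sorted by dir()."""
    return [name for name in dir(obj)
            if not name.startswith("_") and not callable(getattr(obj, name))]

class FakeObjMeta:          # stand-in for a cast pyds.NvDsObjectMeta
    def __init__(self):
        self.object_id = 1
        self.class_id = 2
        self.confidence = 0.9

print(list_fields(FakeObjMeta()))  # -> ['class_id', 'confidence', 'object_id']
```

Running list_fields on a real obj_meta inside the probe should print the same kind of list for the actual binding.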

Yes, your conclusions are correct.

Thanks everyone. Now I understand what information is available. I’ll post some code later this week to help others.

Thank you and @amycao, here is a summary of the solution;

The metadata you can get per recognised object (regardless of whether it came from the PGIE or an SGIE) is given on each iteration of the loop in osd_sink_pad_buffer_probe through the obj_meta variable. The list of available fields is found in the links above.

A summary (though not sure if complete) is the following;

“obj_meta.object_id” - Tracker ID
“obj_meta.class_id” - The object as defined in the models list
“obj_meta.rect_params” - See this
“obj_meta.rect_params.left” - Detected Area
“obj_meta.rect_params.top” - Detected Area
“obj_meta.rect_params.width” - Detected Area
“obj_meta.rect_params.height” - Detected Area
“obj_meta.confidence” - Confidence of the Detection
“obj_meta.text_params” - See this
“obj_meta.classifier_meta_list” - See this

Hi @amycao, thanks for your explanation. I am a newbie in DeepStream. I tried to insert every detected object into the database inside the tiler_src_pad_buffer_probe function, but after several minutes my script’s CPU usage decreases and the RTSP sink stops broadcasting frames.

My question is: what is the correct way to do database operations inside the pipeline? Should I use Kafka first?


Grabbing metadata from a probe led to a slower video feed and slower overall performance. Implementing compute functionality inside a probe is not advisable, as it is a blocking call.
For better performance you should implement a custom GStreamer plugin to achieve the required functionality.
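One common way to follow that advice without writing a full plugin is to keep the probe cheap: push a small dict onto a queue and let a worker thread do the slow database writes. A sketch with standard-library pieces only; the names event_queue and db_worker are mine, not DeepStream API, and the list stands in for the database.

```python
import queue
import threading

event_queue = queue.Queue()
stored = []                         # stands in for the database in this sketch

def db_worker():
    while True:
        event = event_queue.get()
        if event is None:           # sentinel: stop the worker
            break
        stored.append(event)        # real code would INSERT into the DB here

worker = threading.Thread(target=db_worker, daemon=True)
worker.start()

# Inside the probe, only this cheap, non-blocking call would run:
event_queue.put({"class_id": 2, "confidence": 0.9})

event_queue.put(None)               # shut down when the pipeline stops
worker.join()
print(len(stored))  # -> 1
```

The probe then returns immediately, while inserts happen off the streaming thread; a custom plugin is still the cleaner option for heavy processing.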

Thanks for your reply. Are there any Python resources about storing metadata to a database with a custom GStreamer plugin?

Hello @tttuser, how do you link your database with DeepStream in your Python implementation?

–> refer to this How to get the full list of obj_meta data available


Actually, once you have the metadata from DeepStream, you just use normal Python to insert it into a database. The code you implement will depend on the database you use.

As an example, for SQLite you could do the following in the tiler_sink_pad_buffer_probe function, inside the “while l_obj is not None:” loop;

import sqlite3
from datetime import datetime

object_id = obj_meta.class_id
confidence = obj_meta.confidence
x = int(obj_meta.rect_params.left)
y = int(obj_meta.rect_params.top)
h = y + int(obj_meta.rect_params.height)
w = x + int(obj_meta.rect_params.width)

conn = sqlite3.connect("mydatabase.db")
c = conn.cursor()
if conn is not None:

    # Save information to Database

    # Check if the relevant tables exist, if not create them
    sql_command = """CREATE TABLE IF NOT EXISTS events (
                         id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
                         object_id INTEGER NOT NULL,
                         confidence REAL NOT NULL,
                         x INTEGER NOT NULL,
                         y INTEGER NOT NULL,
                         h INTEGER NOT NULL,
                         w INTEGER NOT NULL,
                         time_event TEXT NOT NULL)"""
    c.execute(sql_command)

    # Add new event (a parameterised query avoids SQL injection)
    sql_command = """INSERT INTO events (object_id, confidence, x, y, h, w, time_event)
                     VALUES (?, ?, ?, ?, ?, ?, ?)"""
    c.execute(sql_command, (object_id, confidence, x, y, h, w,
                            datetime.now().isoformat()))

    # Commit the changes to db
    conn.commit()

    # Close the connection
    conn.close()
Thanks for your reply. I got it.

Thank you, I’ll look at your reference.