I am modifying the deepstream_test_2.py example in the python examples of the deepstream 5.0 SDK.
I want to understand all the information available for external use like object_id (tracker), class_id (pgie), etc.
I know the data is pulled into obj_meta in osd_sink_pad_buffer_probe, and I have also successfully extracted bbox data using obj_meta.rect_params.left/top/width/height.
But I have not found any documentation on how to extract the sgie1/2/3 data, or any way to discover all the data available from the pgie or tracker that I can pull out.
I want to be able to use this data to send to a DB and also for further external analytics.
I’m using deepstream 5.0 SDK on a Jetson Nano (the new one with 2 CSI ports).
For linking to the database, I have already done the links and implementation in python, though your links are very interesting.
My main problem remains how to get a list of the data available in obj_meta. I'm sure it is a matter of understanding the data structure (and getting the terminology right). Once Python executes the following:
obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
is obj_meta a dict, a list, an array, or something else?
I want to get a list of the variable names in the metadata extracted from the pipeline, so that I can decide which data is useful. The list should look something like:
"object_id"
"class_id"
"rect_params.left"
"rect_params.top"
"rect_params.width"
"rect_params.height"
"confidence"
"unique_component_id"
etc.
I have read your links before, but I find them confusing and not fully clear. I understand that the metadata compiled while running through the pipeline is held in NvDsObjectMeta and its child NvDsClassifierMeta, and is cast via pyds so that the C code remains the owner of the memory. But I get stuck here.
Is there a simple Python loop, or a command in the API, to print a list of the variable names to the CLI?
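Something along these lines is what I am after. Since pyds is only importable inside a DeepStream environment, the sketch below uses a stand-in class (DummyMeta is hypothetical; in a real probe you would call list_fields(obj_meta) on the cast pyds object, whose bound attributes plain dir() should also enumerate):

```python
class DummyMeta:
    """Stand-in with a few fields that NvDsObjectMeta also exposes."""
    def __init__(self):
        self.object_id = 0
        self.class_id = 0
        self.confidence = 0.0

def list_fields(meta):
    """Return the non-dunder, non-callable attribute names of meta."""
    return sorted(
        name for name in dir(meta)
        if not name.startswith("_")
        and not callable(getattr(meta, name))
    )

print(list_fields(DummyMeta()))  # ['class_id', 'confidence', 'object_id']
```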
Thank you @dorin.clisu.ntt. These are very good links and I did find them a few days ago. Between you and @Amycao you have been great in showing me that I am at least looking at the right documentation and that there is nothing better at the moment.
Can you, or someone knowledgeable, confirm whether my conclusions are correct:
The osd_sink_pad_buffer_probe function sees metadata from each pgie and sgie in the pipeline (they all add to the metadata). This means it will cycle through each one and each answer, e.g. car color, car type, and which object was found, in the property "obj_label".
You can use the "while l_obj is not None" loop to find all the PGIE and SGIE information. For example, in the deepstream-test2 Python example this gives the PGIE information on the object (e.g. car), then the SGIE1 information on car color, SGIE2 on car make, and finally SGIE3 on car type.
The tracker information is in the NvDsObjectMeta property object_id.
If this is a good and accurate summary, then I'll write a simple Python script in this topic and mark it as the solution so others can easily use it in their projects.
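If the summary above holds, the SGIE traversal would look roughly like the sketch below. The list-walking is factored into a plain function so it can be shown without a DeepStream install; in a real probe the two cast arguments would be pyds.NvDsClassifierMeta.cast and pyds.NvDsLabelInfo.cast, and the field names (classifier_meta_list, label_info_list, unique_component_id, result_label) are the NvDs ones as I understand them:

```python
def collect_labels(obj_meta, cast_classifier, cast_label):
    """Walk obj_meta.classifier_meta_list and return one
    (unique_component_id, result_label) pair per SGIE label."""
    results = []
    l_class = obj_meta.classifier_meta_list
    while l_class is not None:
        cls_meta = cast_classifier(l_class.data)   # NvDsClassifierMeta
        l_label = cls_meta.label_info_list
        while l_label is not None:
            label = cast_label(l_label.data)       # NvDsLabelInfo
            results.append((cls_meta.unique_component_id,
                            label.result_label))
            l_label = l_label.next
        l_class = l_class.next
    return results

# In a real probe this would be called as:
#   collect_labels(obj_meta,
#                  pyds.NvDsClassifierMeta.cast,
#                  pyds.NvDsLabelInfo.cast)
```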
Is there a simple python loop or commands from the API to print a list of the names of the variable to the CLI ?
→ You could refer to the test3 Python sample, function tiler_src_pad_buffer_probe, for how to retrieve metadata properties, and do some customization accordingly. Refer to the data structures above.
Hi @Amycao, thanks for your explanation. I am a newbie in DeepStream. I tried to insert every detected object into a database inside the tiler_src_pad_buffer_probe function, but after several minutes my script's CPU usage drops and the RTSP sink stops broadcasting frames.
My question is: what is the correct way to do database operations inside the pipeline? Should I use Kafka first?
Grabbing meta_data from a probe leads to a slower video feed and slower overall performance. Implementing compute functionality inside a probe is not advisable, as the probe is a blocking call.
For better performance you should implement a custom GStreamer plugin to achieve the required functionality.
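A lighter-weight alternative to a custom plugin is to keep the probe non-blocking by handing the data to a background thread. This is a sketch (not DeepStream-specific; table and field names are just examples): the probe only calls put_nowait with a plain dict, and the worker thread owns the SQLite connection, since sqlite3 connections must stay on one thread.

```python
import queue
import sqlite3
import threading

events = queue.Queue(maxsize=1000)   # the probe only ever put()s here
stop_flag = threading.Event()

def db_writer(db_path, events, stop_flag):
    """Owns the SQLite connection; drains the queue until told to stop."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS events "
                 "(object_id INTEGER NOT NULL, confidence REAL NOT NULL)")
    while not (stop_flag.is_set() and events.empty()):
        try:
            ev = events.get(timeout=0.5)
        except queue.Empty:
            continue
        conn.execute("INSERT INTO events (object_id, confidence) "
                     "VALUES (?, ?)",
                     (ev["object_id"], ev["confidence"]))
        conn.commit()
    conn.close()

# Start once, before the pipeline runs:
#   threading.Thread(target=db_writer,
#                    args=("mydatabase.db", events, stop_flag),
#                    daemon=True).start()
# Then inside the "while l_obj is not None:" loop, just:
#   events.put_nowait({"object_id": obj_meta.object_id,
#                      "confidence": obj_meta.confidence})
```

The probe then does no I/O at all; if the queue ever fills up, put_nowait raises queue.Full and the event is dropped rather than stalling the pipeline.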
Actually, once you have the metadata from DeepStream, you just use normal Python to insert it into a database; the code you implement depends on which database you use.
As an example, for SQLite you could do the following in the tiler_sink_pad_buffer_probe function, inside the "while l_obj is not None:" loop (for performance you would normally open the connection once, outside the probe, rather than per object):
from datetime import datetime
import sqlite3

# Values pulled from the current object's metadata
object_id = obj_meta.class_id   # class of the object (use obj_meta.object_id for the tracker ID)
confidence = obj_meta.confidence
y = int(obj_meta.rect_params.top)
h = y + int(obj_meta.rect_params.height)   # bottom edge of the bbox
x = int(obj_meta.rect_params.left)
w = x + int(obj_meta.rect_params.width)    # right edge of the bbox

conn = sqlite3.connect("mydatabase.db")
if conn is not None:
    c = conn.cursor()
    # Check if the relevant table exists; if not, create it
    c.execute("""CREATE TABLE IF NOT EXISTS events (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        object_id INTEGER NOT NULL,
        confidence REAL NOT NULL,
        x INTEGER NOT NULL,
        y INTEGER NOT NULL,
        h INTEGER NOT NULL,
        w INTEGER NOT NULL,
        time_event TEXT NOT NULL
    );""")
    # Add the new event; parameterized queries avoid quoting mistakes
    c.execute("INSERT INTO events (object_id, confidence, x, y, h, w, time_event) "
              "VALUES (?, ?, ?, ?, ?, ?, ?)",
              (object_id, confidence, x, y, h, w,
               datetime.now().strftime("%Y-%m-%dT%H:%M:%S")))
    # Commit the changes to the db
    conn.commit()
    # Close the connection
    conn.close()