DeepStream Python to get bounding boxes from detections

Hello!
I’m using the Python DeepStream bindings and I’m aware they are still in the Alpha phase. My hardware is an NVIDIA Jetson Nano with JetPack 4.2.2 and Python 3.
I’ve already used deepstream-test3 (deepstream_test_3.py) with my RTSP feed and it works perfectly, but now I have two questions:
The first is how this DeepStream app draws the bounding boxes and labels on the output video.
The second is how I can obtain the coordinates of the bounding boxes of the detected objects.
Thanks in advance!

  1. We have the nvdsosd plugin, which can get the bboxes and labels from the metadata and draw them on the original video.
  2. You can check test3: tiler_src_pad_buffer_probe() → batch_meta/frame_meta/obj_meta (see the sketch below).

https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream_Plugin_Manual%2Fdeepstream_plugin_metadata.03.1.html
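
For reference, here is a minimal sketch of that traversal, modeled on tiler_src_pad_buffer_probe() in deepstream-test3. It assumes the pyds casting helpers from the deepstream_python_apps bindings of that era (glist_get_nvds_frame_meta / glist_get_nvds_object_meta); newer releases may expose different cast functions.

import pyds
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def tiler_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # DeepStream attaches NvDsBatchMeta to every buffer that reaches this pad.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)

        # Each frame carries a list of detected objects (NvDsObjectMeta).
        num_objects = 0
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.glist_get_nvds_object_meta(l_obj.data)
            # obj_meta holds class_id, confidence and rect_params for this detection.
            num_objects += 1
            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        print("Frame", frame_meta.frame_num, "detected objects:", num_objects)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK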

All right! I’ll give it a try.

I declared mine in tiler_src_pad_buffer_probe(), but why does the output come out as:

647
<pyds._NvOSD_RectParams object at 0x7f958d1688>

This is the code that gets and prints it:

while l_obj is not None:
    try:
        # Cast l_obj.data to pyds.NvDsObjectMeta
        obj_meta = pyds.glist_get_nvds_object_meta(l_obj.data)
    except StopIteration:
        break
    data = obj_meta.rect_params
    print(frame_number)
    print(data)
    try:
        l_obj = l_obj.next
    except StopIteration:
        break

Why is obj_meta.rect_params coming out as an object address?

Yes, manbencharongkul, I’m on the same page. I don’t know why it is happening.

To get the actual bounding box coordinates, you need to add a probe on the source pad of the nvinfer component and read the bounding box coordinates, which are present as part of the object metadata.

There’s an example here: https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/8b4b9547761437c7d204d6bd11e90bfbe9c04d5c/apps/deepstream-test4/deepstream_test_4.py#L291
You can get the bbox coordinates from obj_meta.rect_params.left/top/width/height. Printing rect_params itself only shows the Python object’s repr; print those individual fields to see the numbers.
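
Putting that together, here is a minimal sketch. It assumes your nvinfer element is stored in a variable named pgie (adjust to your pipeline); it attaches the probe to the nvinfer src pad and prints the individual fields of rect_params.

import pyds
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def pgie_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.glist_get_nvds_object_meta(l_obj.data)
            rect = obj_meta.rect_params  # NvOSD_RectParams struct
            # Print the fields, not the struct: printing rect itself only shows its repr.
            print("frame=%d class=%d left=%.0f top=%.0f width=%.0f height=%.0f" % (
                frame_meta.frame_num, obj_meta.class_id,
                rect.left, rect.top, rect.width, rect.height))
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK

# Attach the probe to the nvinfer ("pgie") source pad so the object metadata
# is already populated when the callback fires.
pgie_src_pad = pgie.get_static_pad("src")
pgie_src_pad.add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0)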

Hey, I’m running the DashCamNet pre-trained model on a Jetson Nano with DeepStream 5.0. It runs fine on the Nano, but I would like to extract the bounding box coordinates of the detected objects. Please guide me through this.

Hi zeeshanjafferi,

Please open a new topic for your issue. Thanks.