DeepStream Python to get bounding boxes from detections

I’m using the DeepStream Python bindings and I’m aware they are still in the alpha phase. My hardware is an NVIDIA Jetson Nano with JetPack 4.2.2 and Python 3.
I’ve already run deepstream-test3 with my RTSP feed and it works perfectly, but now I have two questions:
The first is how this DeepStream app draws the bounding boxes and labels on the output video.
The second is how I can obtain the coordinates of the bounding boxes of the detected objects.
Thanks in advance!

  1. We have the nvdsosd plugin, which gets the bboxes and labels from the metadata and draws them on the original video.
  2. You can check test3: tiler_src_pad_buffer_probe() -> batch_meta / frame_meta / obj_meta
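The traversal in tiler_src_pad_buffer_probe() walks three nested linked lists (batch → frames → objects) via `.data` / `.next`. Here is a minimal runnable sketch of that pattern; `Node` and `ObjMeta` are stand-ins I made up for the pyds GList nodes and `pyds.NvDsObjectMeta` (the real code casts with `pyds.NvDsFrameMeta.cast()` / `pyds.NvDsObjectMeta.cast()`, as the comments note):

```python
class Node:
    """Stand-in for a pyds GList node exposing .data and .next."""
    def __init__(self, data, next_node=None):
        self.data = data
        self.next = next_node

class ObjMeta:
    """Stand-in for pyds.NvDsObjectMeta with a rect_params field."""
    def __init__(self, left, top, width, height):
        self.rect_params = type("RectParams", (), {
            "left": left, "top": top, "width": width, "height": height})()

def collect_bboxes(frame_meta_list):
    """Mirror the nested while-loops of tiler_src_pad_buffer_probe()."""
    boxes = []
    l_frame = frame_meta_list
    while l_frame is not None:
        frame_meta = l_frame.data   # real code: pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta["obj_meta_list"]
        while l_obj is not None:
            obj_meta = l_obj.data   # real code: pyds.NvDsObjectMeta.cast(l_obj.data)
            r = obj_meta.rect_params
            boxes.append((r.left, r.top, r.width, r.height))
            l_obj = l_obj.next
        l_frame = l_frame.next
    return boxes

# One frame containing two detected objects
objs = Node(ObjMeta(10, 20, 30, 40), Node(ObjMeta(5, 5, 50, 60)))
frames = Node({"obj_meta_list": objs})
print(collect_bboxes(frames))  # [(10, 20, 30, 40), (5, 5, 50, 60)]
```

On the Jetson the same loop runs inside the pad probe from deepstream-test3, with the pyds casts in place of the stand-ins.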


All right! I’ll give it a try.

I declared mine in tiler_src_pad_buffer_probe(), but why does the output come out as:

<pyds._NvOSD_RectParams object at 0x7f958d1688>

This is the code where I declare and print it:

while l_obj is not None:
    # Casting to pyds.NvDsObjectMeta
    obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
    data = obj_meta.rect_params
    print(data)
    l_obj = l_obj.next

Why is obj_meta.rect_params coming out as an address?

Yes, manbencharongkul, I’m in the same boat. I don’t know why it is happening.

To get the actual bounding box coordinates, you need to add a probe on the source pad of the nvinfer component and read the bounding box coordinates, which are present as part of the object metadata.

There’s an example here:
You can get bbox coordinates from obj_meta.rect_params.left/top/width/height.
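To see why `print(obj_meta.rect_params)` shows only `<pyds._NvOSD_RectParams object at 0x...>`: that is Python's default object repr, not an error. You need to read the individual fields. A minimal stand-alone demo, where `RectParams` is a stand-in I'm using for `pyds.NvOSD_RectParams` (the real struct exposes the same left/top/width/height attributes):

```python
class RectParams:
    """Stand-in for pyds.NvOSD_RectParams (assumption for illustration)."""
    def __init__(self, left, top, width, height):
        self.left, self.top, self.width, self.height = left, top, width, height

rect = RectParams(100.0, 50.0, 64.0, 128.0)

# Printing the object itself only shows the default repr with its address:
print(rect)  # <...RectParams object at 0x...>

# Read the individual fields to get the actual coordinates:
bbox = (rect.left, rect.top, rect.width, rect.height)
print(bbox)  # (100.0, 50.0, 64.0, 128.0)
```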


Hey, I’d like the bounding box coordinates of objects detected by the DashCamNet pre-trained model. I am using a Jetson Nano with DeepStream 5.0. DashCamNet runs fine on the Nano, but I would like to extract the bounding box coordinates of the detected objects. Please guide me through this.

Hi zeeshanjafferi,

Please open a new topic for your issue. Thanks.