How to keep the bounding boxes when interval is NOT zero?

The bounding box only appears on every nth frame when I set interval = n (n != 0).

How can I keep the bounding boxes on every frame?

If you set the interval parameter, inference is skipped on some frames, so those frames have no bbox information. If you want to keep the bounding boxes on every frame, please do not set that parameter.
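For context, interval is set in the gst-nvinfer config file's [property] group; a minimal fragment:

```ini
[property]
# 0 (default) = run inference on every frame; n = skip n frames between inferences
interval=0
```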

Is there any alternative way to draw the bbox, such as a callback function to implement this feature?

I hope NVIDIA can provide this functionality officially. It is very troublesome to implement it yourself.

My implementation approach; the following is pseudocode, for reference only:

class ObjMetaCache:
    border_width: int
    text: str
    ...

class FrameMetaCache:
    source_id: int
    obj_meta_caches: list[ObjMetaCache]
    ...

last_frame_boxes = {}

def pgie_pad_buffer_probe(pad, info):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer")
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        if frame_meta.bInferDone:  # inference ran on this frame
            frame_obj = FrameMetaCache()
            frame_obj.source_id = frame_meta.source_id
            frame_obj.obj_meta_caches = []
            ...
            l_obj = frame_meta.obj_meta_list
            while l_obj is not None:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
                # save your bbox fields
                bbox_obj = ObjMetaCache()
                bbox_obj.border_width = obj_meta.rect_params.border_width
                frame_obj.obj_meta_caches.append(bbox_obj)
                ...
                l_obj = l_obj.next
            # save the frame's cached boxes, keyed by source
            last_frame_boxes[frame_meta.source_id] = frame_obj
        else:  # not an inference frame: replay the cached boxes
            frame_cache = last_frame_boxes.get(frame_meta.source_id)
            if frame_cache is not None:
                for bbox_obj in frame_cache.obj_meta_caches:
                    obj_meta = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)
                    obj_meta.rect_params.border_width = bbox_obj.border_width
                    ...
                    pyds.nvds_add_obj_meta_to_frame(frame_meta, obj_meta, None)

        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
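Stripped of the pyds calls, the caching idea above reduces to a plain per-source dictionary. A framework-free sketch (the class and function names here are illustrative, not part of the DeepStream API):

```python
from dataclasses import dataclass, field

@dataclass
class CachedBox:
    left: float
    top: float
    width: float
    height: float

@dataclass
class CachedFrame:
    source_id: int
    boxes: list = field(default_factory=list)

last_frame_boxes = {}  # source_id -> CachedFrame

def on_frame(source_id, infer_done, detected_boxes):
    """Return the boxes to draw for this frame.

    On inference frames, cache the fresh detections; on skipped
    frames, fall back to the last cached detections for this source.
    """
    if infer_done:
        cache = CachedFrame(source_id, list(detected_boxes))
        last_frame_boxes[source_id] = cache
        return cache.boxes
    cached = last_frame_boxes.get(source_id)
    return cached.boxes if cached else []
```

In a real probe, `infer_done` corresponds to frame_meta.bInferDone and the returned boxes are re-attached to the frame via acquired object metas, as in the pseudocode above.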

Currently, we can support adding display meta to the frame to draw the bbox. You can refer to the display_meta usage in our code and just modify display_meta.rect_params[0].
Or you can just add a tracker plugin after the pgie; the tracker adds the bbox itself.
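A sketch of that display-meta route. The fill_rect helper and the SimpleNamespace stand-in below are illustrative only; in a real probe the meta would be acquired with pyds.nvds_acquire_display_meta_from_pool(batch_meta) and attached with pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta):

```python
from types import SimpleNamespace

def fill_rect(display_meta, i, left, top, width, height, border_width=2):
    # Populate rect_params[i] with the fields the OSD reads (NvOSD_RectParams).
    display_meta.num_rects = max(display_meta.num_rects, i + 1)
    rect = display_meta.rect_params[i]
    rect.left, rect.top = left, top
    rect.width, rect.height = width, height
    rect.border_width = border_width

# Stand-in for a pyds display meta; the real pool object holds 16 rect slots.
meta = SimpleNamespace(num_rects=0,
                       rect_params=[SimpleNamespace() for _ in range(16)])
fill_rect(meta, 0, left=100, top=50, width=80, height=40)
```

Note that one display meta holds a fixed number of rect slots, so frames with many boxes need additional metas acquired from the pool.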

Is there any demo code available for reference?

Could you attach your whole pipeline here? In theory, all you need to do is add a tracker plugin after the pgie plugin.

I have deepstream-YOLO configurations:

A: jetson-fpv/utils/dsyolo/yolov8n_infer_primary.txt at main · SnapDragonfly/jetson-fpv · GitHub

==> If I set interval = n (n != 0), the bounding box only appears on every nth frame.

B: jetson-fpv/utils/dsyolo/yolov8n_infer_primary_BYTETrack.txt at main · SnapDragonfly/jetson-fpv · GitHub

==> If I set interval = n (n != 0), the bounding box also only appears on every nth frame.

Any suggestions?

You can try to add a tracker in your source_config_yolov8n.txt file.
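For a deepstream-app style source config, the tracker is typically enabled with a [tracker] group like the fragment below; the library path and the NvDCF config file name follow the usual DeepStream install layout and may need adjusting for your setup:

```ini
[tracker]
enable=1
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_perf.yml
```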

I think I’m close to the objective: when nvDCF is enabled, the bounding boxes are always there.

But … I got the output below, and the FPS is low.

**PERF:  22.51 (22.27)
gstnvtracker: Unable to acquire a user meta buffer. Try increasing user-meta-pool-size
  • Source stream: 1080P@30FPS
  • yolov8 without nvDCF works fine; see the output below, which matches 30FPS, and there is no noticeable latency.
**PERF:  42.01 (41.23)

Here is the configuration: jetson-fpv/utils/dsyolo/source_config_yolov8n_nvDCF.txt at main · SnapDragonfly/jetson-fpv · GitHub


Please help! Any ideas on how to improve performance? I need to receive 1080P@60FPS streaming video for object tracking.

EDIT: If I want to track only three of the objects in labels.txt, how do I ignore the other objects?

  • person
  • car
  • bicycle
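On the class-filtering question: gst-nvinfer has a filter-out-class-ids key in the [property] group that drops detector output for the listed class ids (ids are 0-based line numbers in labels.txt). A minimal fragment, with the ids shown purely as placeholders:

```ini
[property]
# drop detections for these class ids (0-based indices into labels.txt)
filter-out-class-ids=3;4
```

Note this only discards output after inference; the network still processes all classes, so the saving is mostly in downstream tracker/OSD work.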

EDIT2: I tried the DeepStream nvDCF tracker; it doesn’t always show bounding boxes.

If you add a tracker, it will inevitably slow down the performance.

If you want to keep the bboxes without slowing down performance, you can record the bbox coordinates yourself and then add them to the metadata. You can refer to deepstream_imagedata-multistream_redaction.py to learn how to draw the bbox on the image.
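The redaction sample referenced above gets each frame as a NumPy array (via pyds.get_nvds_buf_surface) and draws on it directly; the drawing step itself is just array writes. A NumPy-only sketch of a box-outline helper (draw_box is a hypothetical name, not from the sample; cv2.rectangle would do the same job):

```python
import numpy as np

def draw_box(frame, left, top, width, height, color=(255, 0, 0), thickness=2):
    """Draw a rectangle outline on an HxWx3 uint8 frame, in place."""
    h, w = frame.shape[:2]
    t, l = max(top, 0), max(left, 0)
    r, b = min(top + height, h), min(left + width, w)
    frame[t:t + thickness, l:b] = color          # top edge
    frame[max(r - thickness, 0):r, l:b] = color  # bottom edge
    frame[t:r, l:l + thickness] = color          # left edge
    frame[t:r, max(b - thickness, 0):b] = color  # right edge
    return frame

frame = np.zeros((120, 160, 3), dtype=np.uint8)
draw_box(frame, left=40, top=30, width=60, height=40)
```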

  1. Can I track only three classes of objects to improve performance?
  2. Why does setting interval = 5 in DeepStream with nvDCF cause the bounding boxes to flicker, while in the DeepStream YOLO framework, they do not?
  3. Do you mean that even if I set interval = 5 in the DeepStream YOLO framework’s configuration file, it will not affect the behavior of drawing bounding boxes on every frame?

You can try detecting only three classes of objects. But as I noted before, the tracker will inevitably slow down the performance.

I have tried with our deepstream-test2 sample; there is no flicker issue. Please try it out using the same configuration file.

Yes. You can draw anything on the image yourself without being affected by nvinfer.

Which DS version are you using? I’m using DS6.3.

I’m running the sample with DS 7.1 on A40.

Thanks. I’ll try the test-2 sample with the same configuration on DS6.3 and get back to you later.


BTW, I found the code runs OK on Jetpack 5, but it doesn’t work on 6.2.

It seems OK with DS7.1.

BTW, Is it OK with DS6.3?

Yes. But we recommend you use the latest version.

OK, I’m upgrading the system to 6.2, but there are quite a few issues when upgrading from Jetpack 5.1.4.

If you want to use DeepStream 7.1, we recommend Jetpack 6.1. Please refer to our Jetson model Platform and OS Compatibility table to install the corresponding version.