Sustain bounding boxes in DS 6.0

• Hardware Platform (Jetson / GPU) Tesla T4
• DeepStream Version 6.0
• TensorRT Version 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 470.82.01

Hello, I’m working on a DeepStream-based solution that anonymizes objects in video. I use YoloV4 as the detector and the NvDCF tracker to sustain the bounding boxes from the detector. The problem is that when the detector doesn’t return a box, e.g. for a single frame (a false negative), the tracker doesn’t return a box either (shadow tracking). In an anonymization task it’s really important to get continuous anonymization without a flickering box. I know there is something like past-frame data in the tracker metadata, but I get the information about a missing box too late: e.g. in the 6th frame I learn that something was tracked in shadow mode in the 4th and 5th frames, but I already pushed those frames downstream. I was looking through the documentation and found that you suggest adding a custom module.


Can you tell me how I should prepare this module? Maybe there is some template for that?
Thanks


It seems you should buffer several video frames to handle the past-frame data. You will then get some delay when displaying on the screen.

OK, but can you tell me how to do that? How should I introduce this delay?

Can you try increasing maxShadowTrackingAge?

Gst-nvtracker — DeepStream 6.0 Release documentation (nvidia.com)
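Those knobs live in the TargetManagement section of the low-level tracker config file you pass to nvtracker via ll-config-file. A minimal sketch, with only illustrative values:

```yaml
TargetManagement:
  # max number of frames a lost target keeps being tracked in shadow mode
  maxShadowTrackingAge: 30
  # terminate a tentative target this early if it stays unmatched
  earlyTerminationAge: 1
```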

But how does that help me? The object will be tracked longer in shadow mode, but what does that give me? My point is to get the bbox from shadow mode and modify the frame with that box, but the past-frame data tells me about those boxes too late. If I learn in the 8th frame that a box was tracked in shadow mode in the 6th and 7th frames, I need to go back to those frames to modify them, but how? If I increase maxShadowTrackingAge, I will just have to make the delay bigger, and that’s it. So my question is how to implement that delay, i.e. how to push the 6th frame downstream only once I’m able to read the past-frame data from the 10th frame (maxShadowTrackingAge = 4).

It seems you need to develop a plugin which buffers several video frames. The plugin will introduce some delay in the display.

Yes, this is what I wrote to you 7 days ago. I know that I have to prepare a custom module to introduce some delay, because you have this information in the documentation I linked in my first message. But I still don’t know how to prepare this module. What should its logic look like? Can you help me with that? Thanks

What kind of guide do you need? A GStreamer plugin development guide, the logic of buffering video frames, parsing the past-frame meta, or applying the bboxes from past-frame meta to previous frames?
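For the past-frame meta part, the iteration follows the NvDsPastFrameObj* structures from nvds_tracker_meta.h; a rough sketch of the pattern (as in the deepstream-app sample, error handling omitted):

```c
#include "gstnvdsmeta.h"
#include "nvds_tracker_meta.h"

static void
parse_past_frame_meta (GstBuffer *buf)
{
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList *l = batch_meta->batch_user_meta_list; l != NULL; l = l->next) {
    NvDsUserMeta *user_meta = (NvDsUserMeta *) l->data;
    if (user_meta->base_meta.meta_type != NVDS_TRACKER_PAST_FRAME_META)
      continue;

    NvDsPastFrameObjBatch *obj_batch =
        (NvDsPastFrameObjBatch *) user_meta->user_meta_data;

    /* batch -> per-stream -> per-object-id -> per-frame entries */
    for (guint s = 0; s < obj_batch->numFilled; s++) {
      NvDsPastFrameObjStream *stream = obj_batch->list + s;
      for (guint i = 0; i < stream->numFilled; i++) {
        NvDsPastFrameObjList *obj_list = stream->list + i;
        for (guint j = 0; j < obj_list->numObj; j++) {
          NvDsPastFrameObj *obj = obj_list->list + j;
          /* obj->frameNum is the earlier frame this bbox belongs to;
           * obj->tBbox holds the left/top/width/height of the box. */
        }
      }
    }
  }
}
```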

Do you have any video showing your issue, or anything to reproduce it on my side?

Can you also try maxShadowTrackingAge=0 and earlyTerminationAge=0?

I don’t know how to introduce a delay inside a plugin. How should I store the buffers so that I don’t stall DeepStream and don’t block any thread? Should I store them in a vector? Maybe you can give me some logic like: if maxShadowTrackingAge=5, store the last 5 buffers in a vector, check the past-frame data, and if there is a box for the frame from 5 buffers ago, draw the box, push those buffers downstream, and store the current buffer in the vector (I know this exact logic doesn’t work). Something like that; see the rough sketch below. I will share the video in a PM. If I set maxShadowTrackingAge to 0, I won’t have any chance to sustain the bbox, right? If the detector doesn’t return a box, the tracker won’t track it.
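To make what I mean concrete, here is the kind of logic I imagine, written as a custom element’s chain function (all names like MyDelay are made up, and I know this is only a sketch, not working code):

```c
#include <gst/gst.h>

static GstFlowReturn
my_delay_chain (GstPad *pad, GstObject *parent, GstBuffer *buf)
{
  MyDelay *self = MY_DELAY (parent);   /* hypothetical element type */

  /* 1. Parse the past-frame meta of the incoming buffer and, for every
   *    recovered bbox, draw it on the matching buffer still held in
   *    self->queue (matched via NvDsFrameMeta->frame_num). */

  g_queue_push_tail (self->queue, buf);

  /* 2. Hold a window of maxShadowTrackingAge frames. */
  if (g_queue_get_length (self->queue) <= self->max_age)
    return GST_FLOW_OK;

  /* 3. The oldest frame cannot receive late bboxes anymore: push it. */
  return gst_pad_push (self->srcpad, g_queue_pop_head (self->queue));
}
```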

Any update?

Can you also try maxShadowTrackingAge=0 and earlyTerminationAge=0? This setting will disable shadow tracking and output the bbox directly.

But I want to sustain the bbox when the detector returns a false negative for one or two frames, and based on your documentation, when there is no matched box from the detector and maxShadowTrackingAge=0, the tracked box will be terminated. It stays active only while the tracked box matches a detected box.


But yes, I tried maxShadowTrackingAge=0 and earlyTerminationAge=0 and it makes no difference. Based on your documentation, using past-frame data is the only way to sustain the bbox, so I’m still waiting for tips on how to do that.

Why doesn’t this kind of buffering work? Note that you also need to allocate more video frames if you buffer video frames.

I don’t know; when I tried it, everything got stuck. I think I don’t have enough knowledge about DS and I blocked something. I was hoping you could tell me why it doesn’t work. And what do you mean by allocating more video frames?

The video buffers come from a pool. The pipeline will get stuck if you hold back 4 video frames while the buffer pool has only allocated 4 video frames. So you need to allocate more buffers if you want to hold on to video buffers.

Please try enlarging “buffer-pool-size” if you use the nvstreammux buffer pool.

Gst-nvstreammux — DeepStream 6.0 Release documentation (nvidia.com)
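For example (the value is only an illustration; it has to exceed the number of frames you hold back):

```c
/* e.g. when creating the muxer: enlarge its output buffer pool so
 * held-back frames do not exhaust it (16 is just an example). */
GstElement *streammux = gst_element_factory_make ("nvstreammux", "mux");
g_object_set (G_OBJECT (streammux), "buffer-pool-size", 16, NULL);
```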

It still doesn’t work. I will send you my code in a PM; maybe you will find the problem.

Does it work on your side?

No, as I said, it doesn’t work; it gets stuck after 4 frames.

Can you try buffering only one buffer? Can you add some logging to make sure you send the buffer to the downstream plugin after receiving the second buffer?

After a few additional fixes it works when buffering only one buffer. I guess I block a thread when I add a buffer to the vector, so if I have 4 threads, I block the whole pipeline when I buffer 4 frames. Do you know how I can free the thread after adding a buffer to the vector? One or two frames are not enough to get nice output, and I don’t want to block any thread because it makes my pipeline slower.