How to drop frames with a queue after decoder before nvstreammux?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU

• DeepStream Version
6.1

• TensorRT Version
8.2.5

• NVIDIA GPU Driver Version (valid for GPU only)
510.73.05

• Issue Type( questions, new requirements, bugs)
questions, new requirements

• How to reproduce the issue ?
• Requirement details

Here is the situation:

I have a pipeline with multiple RTSP streams linked to one nvstreammux node, followed by a detection PGIE and several SGIEs. The SGIE nodes sometimes have quite a large inference latency, which lowers the framerate of the whole pipeline. To avoid corrupted frames (also described as mosaic / broken frames), I set drop-on-latency=0 for rtspsrc.

Pipeline depicted as follows:

rtspsrc0 → nvv4l2decoder → queue → |
rtspsrc1 → nvv4l2decoder → queue → | => nvstreammux → pgie → sgie1 → sgie2 → … → some sink …
rtspsrcN → nvv4l2decoder → queue → |

My question is: how do I drop frames with a queue (or something else) after nvv4l2decoder? That is, set leaky=2 (downstream) and max-size-buffers on the queue, so that when the queue is full, the oldest frames are dropped.
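What I have in mind is roughly the following (a minimal sketch; leaky and max-size-buffers are standard queue properties, the numbers are just examples):

```cpp
#include <gst/gst.h>

/* Sketch: the queue I want between each nvv4l2decoder and nvstreammux.
 * leaky=2 (downstream) should drop the oldest buffers once the queue
 * holds max-size-buffers; time/byte limits are disabled so only the
 * buffer count matters. */
static GstElement *
make_leaky_queue (const gchar *name)
{
  GstElement *q = gst_element_factory_make ("queue", name);
  g_object_set (q,
      "leaky", 2,                  /* 2 = downstream: drop old buffers */
      "max-size-buffers", 10,
      "max-size-bytes", 0,
      "max-size-time", (guint64) 0,
      NULL);
  return q;
}
```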

Actually, as I tested, if downstream elements have a large latency, nvv4l2decoder also pushes buffers at a low frequency. I suspect this is an effect of the LATENCY query or LATENCY event, but I don't know for sure and cannot figure it out.

The result is that the queue after nvv4l2decoder never gets full and thus never drops buffers / frames. I tried adding a videorate element and framerate caps after the queue, but the result is the same. I also tested dropping the LATENCY query / LATENCY event with a pad probe, but that did not succeed either.
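For illustration, a probe that swallows the upstream LATENCY query on the decoder's src pad could look roughly like this (a sketch, not the exact code from my test):

```cpp
#include <gst/gst.h>

/* Sketch: drop upstream LATENCY queries on the decoder's src pad so the
 * downstream latency is not reported back to the decoder. */
static GstPadProbeReturn
drop_latency_query (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstQuery *query = GST_PAD_PROBE_INFO_QUERY (info);
  if (GST_QUERY_TYPE (query) == GST_QUERY_LATENCY)
    return GST_PAD_PROBE_DROP;    /* swallow the latency query */
  return GST_PAD_PROBE_OK;
}

static void
install_latency_query_drop (GstElement *decoder)
{
  GstPad *srcpad = gst_element_get_static_pad (decoder, "src");
  gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_QUERY_UPSTREAM,
      drop_latency_query, NULL, NULL);
  gst_object_unref (srcpad);
}
```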

Another thing to mention: when replacing nvv4l2decoder with avdec_h264, the queue got full soon after start and the overrun callback was called.

I tested by setting GST_DEBUG to 6, grepping the video decoder logs for the keywords “Created new frame” or “pushing buffer”, and adding a callback to check the queue size. The SGIE latency is simulated with a fixed sleep in a custom plugin.

By the way, I do know about other ways, such as setting drop-frame-interval on nvv4l2decoder, setting interval on nvinfer, setting sync=false on all sinks, or reducing the inference time of the SGIEs, but they are not “dynamic”.
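For completeness, those static workarounds look roughly like this (a sketch; the property names are from the element documentation, the values are just examples):

```cpp
#include <gst/gst.h>

/* Sketch of the static (non-dynamic) workarounds mentioned above.
 * decoder / pgie / sink are placeholders for elements in the real pipeline. */
static void
apply_static_workarounds (GstElement *decoder, GstElement *pgie, GstElement *sink)
{
  /* only output every 2nd decoded frame */
  g_object_set (decoder, "drop-frame-interval", 2, NULL);
  /* skip 1 batch between inference runs, reusing previous results */
  g_object_set (pgie, "interval", 1, NULL);
  /* do not sync rendering to the clock */
  g_object_set (sink, "sync", FALSE, NULL);
}
```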

My question is exactly the same as the one in nvstreammux not getting the latest frame.

I don’t think queue can help you to drop frames. queue (gstreamer.freedesktop.org)

Seems valve is what you are looking for. valve (gstreamer.freedesktop.org)
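For example, something along these lines (a sketch; when and how to toggle the drop property is up to your application, the condition is purely illustrative):

```cpp
#include <gst/gst.h>

/* Sketch: a valve placed after the decoder. With drop=TRUE it discards
 * all incoming buffers, with drop=FALSE it passes them through, so the
 * application can start/stop dropping frames dynamically. */
static void
set_frame_dropping (GstElement *valve, gboolean overloaded)
{
  g_object_set (valve, "drop", overloaded, NULL);
}
```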

This is a GStreamer topic; please raise it in the GStreamer community: GStreamer: Mailing Lists

OK, thanks. I will post to the GStreamer mailing list later.

But as I tested,

when replacing nvv4l2decoder with avdec_h264, the queue got full soon after start and the overrun callback was called.

They are both subclasses of GstVideoDecoder, so there must be some difference in how they handle latency or buffer management.

The test code is attached here.
The files are listed below; please start with the script run_test.sh.

latency_test/gstsleep.h
latency_test/latency_test.cpp
latency_test/gstsleep.cpp
latency_test/run_test.sh
latency_test/CMakeLists.txt

latency_test.tar.gz (6.8 KB)

Here is the output (screenshots attached):

when using nvv4l2decoder: [output screenshot]

when using the CPU decoder (avdec_h264): [output screenshot]

This problem is solved. It’s all about buffer pool size.

nvv4l2decoder uses a custom buffer pool with max-buffers defaulting to 4, whereas the queue element after the decoder only starts leaking buffers when its size reaches 10. As a result, the queue can never become full, and the decoder is always waiting for downstream elements to return its buffers.

Here is the solution: increase the nvv4l2decoder buffer pool size by setting num-extra-surfaces to 10 or larger.
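A minimal sketch of the change, assuming the queue after the decoder uses leaky=2 with max-size-buffers=10 as described above:

```cpp
#include <gst/gst.h>

/* Sketch of the fix: enlarge the decoder's output buffer pool so the
 * downstream leaky queue (max-size-buffers = 10) can actually fill up
 * and start dropping old frames. */
static void
fix_decoder_pool_size (GstElement *decoder)
{
  /* num-extra-surfaces should be >= the queue's max-size-buffers */
  g_object_set (decoder, "num-extra-surfaces", 10, NULL);
}
```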
