Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
6.1
• TensorRT Version
8.2.5
• NVIDIA GPU Driver Version (valid for GPU only)
510.73.05
• Issue Type( questions, new requirements, bugs)
questions, new requirements
• How to reproduce the issue ?
• Requirement details
Here is the situation:
I have a pipeline with multiple RTSP streams linked to one nvstreammux node, followed by one detection PGIE and several SGIEs. The SGIE nodes sometimes have quite a large inference latency, which causes the framerate of the whole pipeline to drop. To avoid corrupted frames (also described as mosaic / broken frames), I set drop-on-latency=0 for rtspsrc.
The pipeline is depicted as follows:
rtspsrc0 → nvv4l2decoder → queue → |
rtspsrc1 → nvv4l2decoder → queue → | => nvstreammux → pgie → sgie1 → sgie2 → … → some sink …
rtspsrcN → nvv4l2decoder → queue → |
My question is: how do I drop frames with a queue (or something else) placed after nvv4l2decoder? That is, I set leaky=2 and max-size-buffers on the queue, so that when the queue is full, old frames are dropped.
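In code, what I mean is roughly this (a minimal sketch using the GStreamer Python bindings; the property values are just examples):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Queue placed right after nvv4l2decoder. With leaky=2 (downstream),
# the oldest buffers should be dropped once the queue already holds
# max-size-buffers buffers.
queue = Gst.ElementFactory.make("queue", "post-decode-queue")
queue.set_property("leaky", 2)               # 2 = leak downstream (drop old buffers)
queue.set_property("max-size-buffers", 4)    # example limit: at most 4 buffers
queue.set_property("max-size-bytes", 0)      # disable the byte limit
queue.set_property("max-size-time", 0)       # disable the time limit
```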
Actually, as I tested, when downstream elements have a large latency, nvv4l2decoder also pushes buffers at a low rate. This seems to be the effect of the LATENCY query or the LATENCY event, but I am not sure and cannot figure it out. The result is that the queue after nvv4l2decoder never gets full and thus never drops buffers / frames. I tried adding a videorate element and framerate caps after the queue, but the result is the same. I also tried dropping the LATENCY query or LATENCY event with a pad probe, but did not succeed.
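For reference, the probe I experimented with looks roughly like this (a sketch only; dropping LATENCY this way did not help in my tests):

```python
from gi.repository import Gst

def drop_latency_probe(pad, info):
    # Drop LATENCY queries and events passing through this pad (experiment only).
    if info.type & Gst.PadProbeType.QUERY_UPSTREAM:
        if info.get_query().type == Gst.QueryType.LATENCY:
            return Gst.PadProbeReturn.DROP
    if info.type & Gst.PadProbeType.EVENT_UPSTREAM:
        if info.get_event().type == Gst.EventType.LATENCY:
            return Gst.PadProbeReturn.DROP
    return Gst.PadProbeReturn.OK

# Attached on the decoder src pad, e.g.:
# decoder.get_static_pad("src").add_probe(
#     Gst.PadProbeType.QUERY_UPSTREAM | Gst.PadProbeType.EVENT_UPSTREAM,
#     drop_latency_probe)
```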
Another thing to mention: when I replace nvv4l2decoder with avdec_h264, the queue gets full soon after start and the overrun callback is called.
I tested by setting GST_DEBUG to 6, grepping the video decoder logs for the keywords “Created new frame” or “pushing buffer”, and adding a callback to check the queue size. The SGIE latency is simulated with a fixed sleep in a custom plugin.
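The queue-size check is roughly this (sketch; `queue` is the post-decode queue from above — with avdec_h264 the overrun callback fires, with nvv4l2decoder it never does):

```python
from gi.repository import GLib

def on_overrun(q):
    # Called by the queue when it becomes full; never fires with nvv4l2decoder in my tests.
    print("queue overrun, current-level-buffers =",
          q.get_property("current-level-buffers"))

def poll_queue_level(q):
    # Periodic check of how many buffers the queue actually holds.
    print("current-level-buffers =", q.get_property("current-level-buffers"))
    return True  # keep the timeout running

queue.connect("overrun", on_overrun)
GLib.timeout_add_seconds(1, poll_queue_level, queue)
```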
By the way, I do know about other options, such as setting drop-frame-interval on nvv4l2decoder, setting interval on nvinfer, setting sync=false on all sinks, or reducing the inference time of the SGIEs, but they are not “dynamic”.
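For completeness, these are the static settings I am referring to (sketch; `decoder`, `pgie` and `sink` are illustrative variable names for the pipeline elements above — the values are fixed at configuration time rather than adapting to the actual SGIE latency):

```python
# Decoder drops frames at a fixed, preconfigured interval, regardless of downstream load.
decoder.set_property("drop-frame-interval", 2)

# nvinfer skips inference on some batches (also settable in the nvinfer config file).
pgie.set_property("interval", 1)

# Sinks stop synchronizing buffers against the clock.
sink.set_property("sync", False)
```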
My question is exactly the same as in nvstreammux not getting the latest frame.