My pipeline always alerts that “Decoder is producing too many buffers”

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Orin Nano 4GB

• DeepStream Version: 6.2

• JetPack Version (valid for Jetson only): 5.1.4

• Issue Type (questions, new requirements, bugs): questions

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

I modified the code based on the deepstream-test4 demo.

My problem is that when a single model runs detection on 8 RTSP streams, the warning "Decoder is producing too many buffers" appears, and after running for about 5 minutes the pipeline develops a significant delay.

The 8 RTSP streams are connected to nvstreammux, and the pipeline structure is: nvurisrcbin –> videorate –> nvstreammux –> nvinfer –> queue1 –> nvvideoconvert –> queue2 –> capsfilter –> queue3 –> fakesink. Why use videorate? Because when multiple models analyze the same RTSP stream, the stream is split with a tee, and each branch of the same stream can then be given a different frame-extraction interval for its model by setting the videorate "max-rate" and "drop-only" properties (a minimal sketch follows below).
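For illustration, a minimal sketch of how one such tee branch could be rate-limited with videorate; the element names, the pipeline variable, and the 1 fps cap are just placeholders, not my actual code:

GstElement *tee     = gst_element_factory_make("tee", "stream-tee");
GstElement *queue_a = gst_element_factory_make("queue", "queue-model-a");
GstElement *rate_a  = gst_element_factory_make("videorate", "videorate-model-a");

/* drop-only=TRUE: videorate may only drop frames, never duplicate them.
 * max-rate=1: cap this branch at 1 frame per second for this model (placeholder value). */
g_object_set(G_OBJECT(rate_a), "drop-only", TRUE, "max-rate", 1, NULL);

gst_bin_add_many(GST_BIN(pipeline), tee, queue_a, rate_a, NULL);
/* rate_a then feeds this model's nvstreammux sink pad */
gst_element_link_many(tee, queue_a, rate_a, NULL);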

My pipeline log:
log1.txt (28.8 KB)

I would greatly appreciate it if any experts or NVIDIA engineers could help me solve this problem. Thank you.

1. Did you set the live-source property of nvstreammux to TRUE? If you are trying to adjust the frame rate, the batched-push-timeout parameter also needs to be adjusted; refer to this FAQ.

2. You can try setting the interval property of nvinfer to control the inference interval.

3. Try to measure the latency of the pipeline (see the sketch below). The delay may also be caused by high GPU/memory load.
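For example, a minimal sketch of a pad probe that reads the DeepStream per-frame latency metadata. NVDS_ENABLE_LATENCY_MEASUREMENT=1 must be exported before launching the app; the probe name and the array size of 8 (matching your batch size) are assumptions, not required values:

#include <gst/gst.h>
#include "nvds_latency_meta.h"

/* Sketch: print the per-frame latency reported by DeepStream for each buffer. */
static GstPadProbeReturn
latency_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsFrameLatencyInfo latency_info[8];   /* one slot per source in the batch (assumption) */

  if (nvds_enable_latency_measurement) {
    guint num_frames = nvds_measure_buffer_latency (buf, latency_info);
    for (guint i = 0; i < num_frames; i++) {
      g_print ("source %u frame %u latency %.2f ms\n",
               latency_info[i].source_id,
               latency_info[i].frame_num,
               latency_info[i].latency);
    }
  }
  return GST_PAD_PROBE_OK;
}

The probe can be attached to the sink pad of the last element (e.g. the fakesink) with gst_pad_add_probe() using GST_PAD_PROBE_TYPE_BUFFER.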

@junshengy

I have tried everything you suggested, but it did not help.
My specific settings are as follows:

decode_bin = gst_element_factory_make("nvurisrcbin", "nv-uri-src-bin");
g_object_set(G_OBJECT(decode_bin), "uri", uri, NULL);
g_object_set(G_OBJECT(decode_bin), "rtsp-reconnect-interval", 60, NULL);   /* seconds */
g_object_set(G_OBJECT(decode_bin), "num-extra-surfaces", 16, NULL);
g_object_set(G_OBJECT(decode_bin), "udp-buffer-size", 524288 * 2, NULL);   /* bytes */
g_object_set(G_OBJECT(decode_bin), "latency", 150, NULL);                  /* jitterbuffer latency in ms */
g_object_set(G_OBJECT(decode_bin), "drop-frame-interval", 25 / detect_interval, NULL);

detect_interval = 1; this means one frame is kept out of every 25 frames and all the rest are dropped.

streammux = gst_element_factory_make("nvstreammux", streammux_name);
g_object_set(G_OBJECT(streammux), "live-source", TRUE, NULL);
g_object_set(G_OBJECT(streammux), "batch-size", 8, NULL);
g_object_set(G_OBJECT(streammux), "width", 640, "height", 640, NULL);
g_object_set(G_OBJECT(streammux), "batched-push-timeout", 40000, NULL);    /* microseconds */
pgie = gst_element_factory_make("nvinfer", NULL);
g_object_set(G_OBJECT(pgie), "config-file-path", pgie_config, NULL);
g_object_set(G_OBJECT(pgie), "filter-out-class-ids", filter_ids.c_str(), NULL);
g_object_set(G_OBJECT(pgie), "interval", 0, NULL);

The interval property is set to 0 here because I only keep one frame out of every 25: decode_bin has already dropped the other 24 frames, so only one frame arrives, and therefore I chose to run inference on every frame that reaches nvinfer.

sink = gst_element_factory_make("fakesink", NULL);
g_object_set(G_OBJECT(sink), "sync", 0, NULL);

But the warning still appears constantly:

0:29:36.340603705     1 0xfffe9802e920 WARN             v4l2videodec gstv4l2videodec.c:1353:gst_v4l2_video_dec_loop:<nvv4l2decoder4> Decoder is producing too many buffers
0:29:36.340615161     1 0xfffe9802e920 WARN             v4l2videodec gstv4l2videodec.c:1353:gst_v4l2_video_dec_loop:<nvv4l2decoder4> Decoder is producing too many buffers
0:29:36.340625306     1 0xfffe9802e920 WARN             v4l2videodec gstv4l2videodec.c:1353:gst_v4l2_video_dec_loop:<nvv4l2decoder4> Decoder is producing too many buffers
0:29:36.340638170     1 0xfffe9802e920 WARN             v4l2videodec gstv4l2videodec.c:1353:gst_v4l2_video_dec_loop:<nvv4l2decoder4> Decoder is producing too many buffers
0:29:36.340648635     1 0xfffe9802e920 WARN             v4l2videodec gstv4l2videodec.c:1353:gst_v4l2_video_dec_loop:<nvv4l2decoder4> Decoder is producing too many buffers
0:29:36.340660060     1 0xfffe9802e920 WARN             v4l2videodec gstv4l2videodec.c:1353:gst_v4l2_video_dec_loop:<nvv4l2decoder4> Decoder is producing too many buffers

1. This problem may also be caused by out-of-order video frame timestamps or frames that cannot be decoded correctly, which results in frame loss. The latency setting of 150 ms is probably too low; try increasing it.

2. Try using TCP by setting select-rtp-protocol=4 (see the sketch below).
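For example, a minimal sketch of those two changes on the existing decode_bin; the 2000 ms value is only an illustrative assumption, not a recommended final setting:

g_object_set(G_OBJECT(decode_bin), "latency", 2000, NULL);              /* jitterbuffer latency in ms, larger than the original 150 (placeholder value) */
g_object_set(G_OBJECT(decode_bin), "select-rtp-protocol", 4, NULL);     /* 4 = TCP, as suggested above */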

@junshengy
Hello, while I was working on resolving this issue, I discovered a memory leak related to the drop-frame-interval parameter of nvurisrcbin. When I set this parameter, memory leaks quickly, about 2 GB over two days. When I comment out the parameter, the memory does not leak. My DeepStream version is 6.2, and my device is a Jetson Orin Nano.

I am not sure whether this is specific to the version or has some other cause, since I can avoid the leak by not using this parameter, but I am reporting the issue to you. If you solve the problem, please let me know. Thank you.

nvurisrcbin contains elements such as rtspsrc/nvv4l2decoder. drop-frame-interval is only effective for nvv4l2decoder.

Can you test with valgrind on the latest version and provide more information? If you have more details, please open a new topic to discuss this issue so that we can focus on it.

@junshengy

This is my new post, which includes AddressSanitizer and Valgrind logs. Although someone told me to upgrade to version 7, upgrading to version 7 is not possible for some devices.