Excessive lost frames detected using appsink

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson AGX Xavier
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1.3
• Issue Type (questions, new requirements, bugs): question

My pipeline:

Simplified:

                 queue -> appsink
               /
v4l2src -> tee1 
               \
                \             /  queue -> videosink
                 queue -> tee2       
                              \
                                 queue -> filesink

Code for appsink:

[...]
/* The appsink has received a buffer */
static GstFlowReturn new_sample(GstElement *sink, gpointer data)
{
   GstSample *sample;

   /* Retrieve the buffer */
   sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));

   if (gst_app_sink_is_eos(GST_APP_SINK(sink)))
   {
       g_print("EOS received in Appsink********\n");
   }

   if (sample)
   {
       /* print a * to indicate a received buffer */
       g_print("*");
       sleep(1);
       gst_sample_unref(sample);
       return GST_FLOW_OK;
   }

   return GST_FLOW_ERROR;
}
[...]

int main(int argc, char *argv[])
{

[...]
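/* NOTE: elsewhere in the setup, the appsink's "emit-signals" property must be TRUE for "new-sample" to be emitted */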

g_signal_connect(appsink, "new-sample", G_CALLBACK(new_sample), NULL);

[...]

}

The use case: when a buffer arrives at the appsink with metadata attached by nvinfer, the appsink reads the metadata and, if a condition is met, calls function A; after X amount of time, it calls function B. I tried to simulate this with the sleep() function, but doing so leads to many lost frames from the source and affects the results of the other sinks in the pipeline:

v4l2src gstv4l2src.c:976:gst_v4l2src_create:<camera-source> lost frames detected: count = 28 - ts: 0:00:12.084989155

I think the reason the issue happens is that the appsink can't keep up with the source element due to the delay introduced (i.e., waiting for X amount of time before calling function B), which is why v4l2src reports lost frames, and those lost frames never reach the other sinks (videosink and filesink). Is that correct?

I’m not sure how to solve this issue; any pointers would be appreciated.

With your code, every time new_sample() is invoked the video buffer is held in new_sample() for more than one second, so the whole pipeline is blocked for more than one second. I don’t know your camera’s FPS, but the camera is a live device: when no video buffer is available, it cannot capture video. Please avoid time-consuming processing in the appsink signal callback function; it holds the video buffer, which blocks the whole pipeline.
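
For example, a minimal sketch of a non-blocking callback (functionA(), functionB(), condition_is_met() and DELAY_SECONDS below are placeholders for your own logic): instead of sleeping in the callback, schedule the delayed call on the GLib main loop and return immediately:

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

extern void functionA(void);                           /* placeholder */
extern void functionB(void);                           /* placeholder */
extern gboolean condition_is_met(GstSample *sample);   /* placeholder metadata check */

#define DELAY_SECONDS 5                                /* placeholder for "X amount of time" */

static gboolean call_function_b(gpointer user_data)
{
    functionB();
    return G_SOURCE_REMOVE;                            /* run once, then remove the timeout source */
}

static GstFlowReturn new_sample(GstElement *sink, gpointer data)
{
    GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));

    if (!sample)
        return GST_FLOW_ERROR;

    /* Do only quick work here; the streaming thread must not be blocked. */
    if (condition_is_met(sample))
    {
        functionA();
        /* Defer functionB() to the GLib main loop instead of sleeping. */
        g_timeout_add_seconds(DELAY_SECONDS, call_function_b, NULL);
    }

    gst_sample_unref(sample);                          /* release the buffer immediately */
    return GST_FLOW_OK;
}

This way every sample is unreffed right away, so v4l2src is never starved of buffers.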


Hi @Fiona.Chen,

Thank you for the reply. I have some follow-up questions.

  1. My pipeline has 3 sinks (appsink, videosink, filesink), and each has its own thread, so why do the other sinks (videosink and filesink) still get blocked?

  2. If the issue is a lack of video buffers, can I increase the number of video buffers to fix it?

  3. I understand that I need to make the appsink signal callback function faster; do you think using some sort of asynchronous function could fix the issue? I’m thinking of using the GLib asynchronous API, since the DS pipeline uses GLib under the hood, but I don’t know if there is a simpler way to get my task done. To clarify, my use case is: when a buffer arrives at the appsink with metadata attached by nvinfer, the appsink reads the metadata and, if a condition is met, calls function A; after X amount of time, it calls function B. What I have in mind is shown below (see the sketch after the diagram):

                 queue -> appsink (start asynchronous tasks)
               /
v4l2src -> tee1 
               \
                \             /  queue -> videosink
                 queue -> tee2       
                              \
                                 queue -> filesink
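
To make question 3 concrete, here is a rough sketch of what I mean by "start asynchronous tasks", using a GLib thread pool (functionA(), functionB(), condition_is_met() and X_SECONDS are placeholders for my actual logic, so this is only an illustration):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

extern void functionA(void);                           /* placeholder */
extern void functionB(void);                           /* placeholder */
extern gboolean condition_is_met(GstSample *sample);   /* placeholder metadata check */

#define X_SECONDS 5                                    /* placeholder delay */

static GThreadPool *worker_pool;                       /* created once in main() */

/* Runs in a worker thread: the long wait happens here, not in the callback. */
static void handle_event(gpointer task_data, gpointer user_data)
{
    functionA();
    g_usleep((gulong)X_SECONDS * G_USEC_PER_SEC);
    functionB();
}

static GstFlowReturn new_sample(GstElement *sink, gpointer data)
{
    GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
    if (!sample)
        return GST_FLOW_ERROR;

    if (condition_is_met(sample))
        g_thread_pool_push(worker_pool, GINT_TO_POINTER(1), NULL);  /* non-NULL task data */

    gst_sample_unref(sample);                          /* buffer is released right away */
    return GST_FLOW_OK;
}

/* In main(), before setting the pipeline to PLAYING:
 *   worker_pool = g_thread_pool_new(handle_event, NULL, 1, FALSE, NULL);
 */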

The video buffer from the tee is duplicated from the original video buffer. When the duplicated buffer is blocked, the original buffer is also blocked. Since the appsink is much slower (less than 1 fps) than v4l2src (most cameras will not work in 1 fps mode), the queue of the appsink branch will soon be full and blocked, and then the tee will be blocked too. You need to guarantee that all sink branches can consume the video buffers in time.
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer-plugins/html/gstreamer-plugins-tee.html

Even with the asynchronous method, you still need to guarantee that the video buffers are released in time; otherwise, dropping buffers is the correct approach for such a live case.
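
For example, the appsink branch can be configured to drop instead of block. This is only a sketch; the element pointer names (appsink_queue, appsink) are assumptions from your pipeline diagram, and the limits should be tuned for your case:

/* Let the appsink branch drop old data instead of blocking the tee. */
g_object_set(appsink_queue,
             "leaky", 2,                /* 2 = downstream: drop the oldest queued buffers */
             "max-size-buffers", 4,
             NULL);

g_object_set(appsink,
             "max-buffers", 1,          /* keep only the newest sample */
             "drop", TRUE,              /* drop old samples instead of blocking upstream */
             NULL);

With settings like these, the camera and the other branches keep running at full rate while the appsink only receives the frames it can actually consume.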