How to control the fps of different streams when running deepstream-app?

I am currently using the deepstream-test2 app with multiple input sources. We are dealing with a situation where we need only one of the streams at full rate and the other streams at a low fps, since the objects in those streams don’t move that often. Ideally we want to drop a lot of frames on the other sources and run in real time on only one source. Is there a possibility of dropping frames on the other sources so that we could add another stream for the app to process?
When running for longer durations we observed that lag built up on all the sources. Hence we want to drop frames from the other sources without losing detections. Any idea how to achieve this?

  1. Drop frames in nvinfer according to source id.

Refer to nvinfer->interval, which skips batches; you can change it to skip according to source id instead. See gst_nvinfer_process_full_frame():

  /* Process batch only when interval_counter is 0. */
  skip_batch = (nvinfer->interval_counter++ % (nvinfer->interval + 1) > 0);

  if (skip_batch) {
    return GST_FLOW_OK;
  }

The nvinfer source id is obtained in gst_nvinfer_sink_event(). A per-source variant of this skip check is sketched below, after the second option.

  2. Drop frames in the decoder.
    See https://devtalk.nvidia.com/default/topic/1061492/deepstream-sdk/deepstream-sdk-faq/ → 9. How can we set “drop-frame-interval” more than 30?
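
For the 1st option, here is a minimal sketch (not the stock plugin code) of how that skip check could be made per-source. It assumes two new fields added to the GstNvInfer structure and a hypothetical MAX_NUM_SOURCES constant, and that the check is moved inside the per-frame loop of gst_nvinfer_process_full_frame(), where frame_meta->source_id identifies the input stream.

  /* Sketch only: per-source frame skipping. Assumes these fields were added
   * to the GstNvInfer structure (they do not exist in the stock plugin):
   *   guint source_interval[MAX_NUM_SOURCES];          - skip interval per source id
   *   guint source_interval_counter[MAX_NUM_SOURCES];  - running counter per source id
   */
  static gboolean
  should_skip_frame (GstNvInfer * nvinfer, guint source_id)
  {
    /* Process one frame out of every (interval + 1) for this source. */
    return (nvinfer->source_interval_counter[source_id]++ %
        (nvinfer->source_interval[source_id] + 1)) > 0;
  }

  /* Inside the loop over frames of the batch in gst_nvinfer_process_full_frame(): */
  if (should_skip_frame (nvinfer, frame_meta->source_id))
    continue;   /* skip inference for this frame only */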

@ChrisDing there’s another question unrelated to this. On the Nano, when we run 4 streams we see a delay and frames getting skipped, but in a video on YouTube it runs smoothly for all 8 streams. I am not entirely sure why this is happening. We were running the deepstream-test3 app to test with multiple sources.

What did you change for test3?
Did you try
$ deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt

@ChrisDing yes, I tried both ways of running the test3 app: one is passing all the input sources directly as parameters to the deepstream-test3 app, and the other was running deepstream-app with the config file. Both resulted in the same issue, where we saw a huge lag on all the streams.

Oh ok. So the config file has only object detection running concurrently on 8 streams, but the problem occurs when we run the deepstream-test3 app on the Nano, since it has an object detector and 3 classifiers. So I guess this was causing the lag; the Nano was probably unable to handle that many models simultaneously.

@ChrisDing could you explain where to make the change for the 1st solution you shared? And for the 2nd solution, dropping frames at the decoder using drop-frame-interval, how can I make the change per source? Could you please elaborate a bit more on it?

The 1st solution may have other issues. You can try the 2nd solution.
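
To make the 2nd solution per-source: with deepstream-app, each [sourceN] group in the config file accepts a drop-frame-interval key, so each source can get its own value. For a test3-style application, below is a minimal sketch, not taken from this thread. It assumes each source is a uridecodebin (as in create_source_bin() of deepstream-test3), that the hardware decoder child is named nvv4l2decoder* (Jetson), and that the source index is passed as user data when connecting the GstBin "deep-element-added" signal; verify the element name and the property’s exact meaning and 0-30 range with gst-inspect-1.0 nvv4l2decoder.

  /* Sketch only: give each source its own drop-frame-interval. */
  static void
  on_deep_element_added (GstBin * bin, GstBin * sub_bin,
      GstElement * element, gpointer user_data)
  {
    guint source_index = GPOINTER_TO_UINT (user_data);

    /* uridecodebin creates the HW decoder as "nvv4l2decoderN" on Jetson. */
    if (g_str_has_prefix (GST_ELEMENT_NAME (element), "nvv4l2decoder")) {
      /* Keep source 0 at full rate, drop frames on the other sources.
       * Values above 30 need the patch from the FAQ linked earlier. */
      guint interval = (source_index == 0) ? 0 : 5;
      g_object_set (G_OBJECT (element), "drop-frame-interval", interval, NULL);
    }
  }

  /* Connected once per source bin, e.g. inside create_source_bin():
   *   g_signal_connect (G_OBJECT (uri_decode_bin), "deep-element-added",
   *       G_CALLBACK (on_deep_element_added), GUINT_TO_POINTER (index));
   */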

@ChrisDing, I gave the second solution a try and it worked fine for two streams. But the moment I have more than two streams, it never works. I get the following warning:

WARNING from element nvvideo-renderer: A lot of buffers are being dropped.
Warning: A lot of buffers are being dropped.
WARNING from element nvvideo-renderer: A lot of buffers are being dropped.
Warning: A lot of buffers are being dropped.

I ran it with 3 streams when I got that warning, and the display gets stuck.
When I run on RTSP streams, the display gets stuck even with just two streams.
I’m guessing it’s on the sink. Any idea how to fix it?

Thanks.

Following up at https://devtalk.nvidia.com/default/topic/1070027/drop-frame-interval-on-rtsp-stream-causes-the-stream-to-get-stuck-/