If DeepStream (DS) tries to process too many video streams, will it eventually discard intermediate frames, or will the program crash?

As the title says, my goal is to use DS to build a neural-network inference system that processes multiple video streams. However, limited by inference speed and GPU computing power, the number of streams the system can process at the same time is bounded, and this limitation shows up in the output FPS. When too many streams arrive at once, or the video is complex (for example, the frames contain many targets for the primary-gie to process, and too many inference operations slow processing down), it becomes hard to keep up in real time (the output may be only 5 FPS while the original video is 25 FPS).

Therefore, what I want to know is this: when my input is an RTSP stream and real-time processing cannot be achieved, frames that cannot be processed immediately will accumulate, waiting to be processed. If this accumulation continues for a while, does DS's default processing logic discard the unprocessed frames, or will the program crash because the buffer is full?

Alternatively, if I want to verify this behavior myself, which part of the open-source code should I read? My program is based on the recommended sample deepstream-test5.

Thank you!

Hi @yhtxud,
There is a buffer pool in nvstreammux. If the inference component, i.e. nvinfer, cannot process frames in time, the pool fills up. Since the source is an RTSP (real-time) stream, the source will discard any frames that cannot be pushed into the nvstreammux pool until there is free space in the pool for a new frame.
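To illustrate the idea, here is a toy sketch of that drop-when-full behavior (this is not DeepStream code — the class name, pool size, and rates are made up for illustration; the real pool lives inside nvstreammux):

```python
from collections import deque

class LiveSourcePool:
    """Toy model of a fixed-size buffer pool fed by a live (RTSP) source.

    When the consumer (inference) is slower than the producer, the pool
    fills up and newly arriving frames are dropped rather than queued
    indefinitely -- so memory stays bounded and the program does not crash.
    """

    def __init__(self, pool_size=4):
        self.pool = deque()          # frames waiting for inference
        self.pool_size = pool_size   # fixed capacity
        self.dropped = 0

    def push_frame(self, frame):
        """Called by the source for every incoming frame."""
        if len(self.pool) >= self.pool_size:
            self.dropped += 1        # live source: discard, don't block
            return False
        self.pool.append(frame)
        return True

    def pop_frame(self):
        """Called by the (slow) inference stage when it is ready."""
        return self.pool.popleft() if self.pool else None

# Producer at 25 FPS, consumer at 5 FPS: once the pool is full,
# most frames are dropped and memory use stays flat.
pool = LiveSourcePool(pool_size=4)
for i in range(100):
    pool.push_frame(i)
    if i % 5 == 0:                   # consumer keeps up with 1 in 5 frames
        pool.pop_frame()
```

The key point the sketch shows: the pool never grows past its fixed capacity, so a slow inference stage leads to dropped frames (lower output FPS), not unbounded memory growth or a crash.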
Hope this answers your question.

Thanks!

Hi mchi,

My question is answered perfectly, thank you for your reply!