As the title says, my goal is to use DeepStream (DS) to build a neural-network inference system that processes multiple video streams. However, inference speed and GPU compute limit how many streams the system can handle at the same time, and this limit shows up in the reported FPS. When too many streams arrive at once, or the video content is complex (for example, when each frame contains many objects for the primary-gie to process, which slows it down), processing can no longer keep up with real time (the output may drop to as low as 5 FPS while the source video is 25 FPS).
So my question is: when the input is an RTSP stream and processing cannot keep up with real time, unprocessed frames will accumulate in buffers waiting to be handled. If this accumulation continues for a while, does DS's default processing logic discard the unprocessed frames, or will the program crash once the buffers are full?
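To make sure I'm asking the right question, here is a toy sketch (plain Python, not DeepStream code) of the two behaviors I can imagine a full buffer having: back-pressure that blocks the producer, or a "leaky" buffer that drops old frames, similar in spirit to a GStreamer `queue` element with its `leaky` property set. The function names here are my own and purely illustrative.

```python
import queue

# Illustrative only, NOT DeepStream code: sketches the two behaviors a
# full buffer can exhibit when a producer (the RTSP source) outruns the
# consumer (inference).

def push_blocking(q, frame):
    """Back-pressure style: the producer stalls until space frees up."""
    q.put(frame)  # blocks when the queue is full

def push_leaky(q, frame):
    """Leaky style: drop the oldest buffered frame to make room."""
    if q.full():
        q.get_nowait()  # discard the oldest frame instead of blocking
    q.put_nowait(frame)

buf = queue.Queue(maxsize=3)
for i in range(10):       # producer is 10 frames "ahead" of the consumer
    push_leaky(buf, i)

# Only the newest 3 frames survive; older ones were silently dropped.
print(list(buf.queue))    # [7, 8, 9]
```

What I want to know is which of these two behaviors (or a crash) DS exhibits by default for RTSP input.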
Also, if I want to verify this behavior myself, which part of the open-source code should I read? My program is based on the deepstream-test5 sample.
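For reference, the only related knob I have found so far is the per-source `drop-frame-interval` in the deepstream-app style config that test5 uses, which skips frames at decode time. Roughly like this (key names should be double-checked against the docs for your DeepStream version):

```ini
# Hypothetical excerpt from a deepstream-test5 style source group.
[source0]
enable=1
type=4                      # 4 = RTSP source
uri=rtsp://<camera-address>
# Decode only every Nth frame so inference can keep up
# (0 = decode every frame).
drop-frame-interval=2
latency=100                 # RTSP jitter-buffer latency in ms
```

But this drops frames unconditionally at decode time; it doesn't tell me what happens by default when no such setting is used and the pipeline simply falls behind.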