DeepStream working difference between live source and stored video files

• Hardware Platform (Jetson / GPU) GPU (1080ti)
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.2.1
• NVIDIA GPU Driver Version (valid for GPU only) 450.102.04
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

I am using the YOLOv4 model with DeepStream 5.0.0.

I want to know the difference in how DeepStream works with a live RTSP source versus a stored video file.

I am able to run 10 stored H.264 stream files without any glitch, but as soon as I use 10 live RTSP sources (with good bandwidth, 25 fps, 4096 kbps bitrate, 1920x1080), the output starts to hang and the frames become glitchy.

Does “batched-push-timeout” use a cache of frames when a live frame is absent?

Are gst-pipelines async or sync? Is there any dependency between consumer and producer here?


Can the thread “Neeed clarity for batch-size and batched-push-timeout for rtsp source” (Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums) answer your question?

There is buffering of frames in the “batched-push-timeout” implementation.
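For reference, in a deepstream-app style configuration the relevant muxer settings live in the `[streammux]` group. The values below are illustrative assumptions matched to the setup described above (10 sources, 25 fps, 1920x1080), not taken from your actual config:

```ini
[streammux]
# Live sources: batch on timeout instead of waiting for every source
live-source=1
# One batch slot per source (10 RTSP streams)
batch-size=10
# Push a (possibly partial) batch after 40 ms, i.e. one frame interval at 25 fps
batched-push-timeout=40000
width=1920
height=1080
```

With `live-source=1`, the muxer pushes whatever frames arrived within the timeout window rather than stalling the whole batch on one late stream.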

GStreamer works in an async way. I’m not sure what you mean by consumer and producer, but in your RTSP case the DeepStream application works as an RTSP client, so smoothness is decided not only by the performance of the DeepStream pipeline but also by the smoothness of the RTSP transfer.
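A common rule of thumb (my assumption, not an official formula) is to set `batched-push-timeout` to one frame interval of the slowest source, so the muxer pushes a partial batch rather than stalling when a live frame is late. A minimal sketch of that calculation:

```python
# Hedged sketch: derive an nvstreammux batched-push-timeout value
# (in microseconds) from the slowest expected source frame rate.
def batched_push_timeout_us(fps: float) -> int:
    """Return one frame interval in microseconds at the given frame rate."""
    return int(1_000_000 / fps)

print(batched_push_timeout_us(25))  # 25 fps sources -> 40000 us
```

At 25 fps this gives 40000 µs (40 ms); a longer timeout smooths over network jitter at the cost of added latency.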