Asynchronous video capture on DeepStream 5.0 using RTSP streams

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)

I want to capture the latest frame coming from the RTSP links regardless of pipeline throughput. Currently, if my pipeline throughput is lower than the stream FPS, DeepStream uses the next immediate frame from the RTSP link instead of the latest one. I am okay with missing some frames (I always want the latest frame).

I want DeepStream to skip the frames that are captured while the other elements (PGIE inference, etc.) are processing. My original stream is 25 FPS, and I want the final FPS to be >= 10.
Looking for any open ideas. TIA.
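To make the requirement concrete: the behavior being asked for is a depth-one buffer that silently discards older frames when a new one arrives, so the consumer always sees the freshest frame. A minimal pure-Python illustration (this is just a model of the desired semantics, not DeepStream API):

```python
from collections import deque

# A deque with maxlen=1 keeps only the most recent item: appending a new
# frame evicts the old one, which is exactly the "latest frame only"
# behavior described above. Frame IDs stand in for real frame buffers.
latest = deque(maxlen=1)

# Producer side: 25 frames arrive while the pipeline is busy elsewhere.
for frame_id in range(25):
    latest.append(frame_id)  # older frames are silently dropped

# Consumer side: when inference is finally ready, only the newest
# frame (ID 24) remains; frames 0-23 were never queued up as backlog.
newest = latest[0]
```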


Do you mean you want to do inference in realtime? Are you using deepstream-app?

Yes, I want to do inference in real time. And no, I am using sample python app.

You need to measure your model's speed first. If the model can only handle 5 FPS, there is no point optimizing DeepStream to achieve 10 FPS.
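A quick way to get that number is to time the per-frame inference step in isolation. The sketch below is generic, assuming you can call your inference step as a plain function; the dummy workload is a placeholder, not a real model:

```python
import time

def measure_fps(process_frame, n_frames=100):
    """Time a per-frame callable and return its average throughput in FPS.
    `process_frame` is a stand-in for one PGIE inference step (assumption:
    your app exposes something callable per frame)."""
    start = time.perf_counter()
    for _ in range(n_frames):
        process_frame()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

# Example with a dummy ~1 ms workload standing in for the model:
fps = measure_fps(lambda: time.sleep(0.001), n_frames=50)
```

If the measured FPS is below 10, no amount of frame dropping will reach the 10 FPS target.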

Is the sink sync?

I’ll rephrase my question - independent of the model, whenever the model is done processing the input frame, it should be able to ingest the latest frame available from the RTSP stream (and not the frame at the top of the queue, since that will be an older frame).

I am okay with losing frames because of this, but I don’t want an ever-increasing time lag.
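One common GStreamer pattern for exactly this is a leaky queue of depth one in front of the slow stage: `leaky=downstream` with `max-size-buffers=1` makes the queue drop its stale buffer whenever a newer one arrives, so latency stays bounded. The sketch below only builds a gst-launch-style description string; the RTSP URL and the element chain are assumptions for illustration, not the poster's actual app:

```python
# Sketch: place a depth-one leaky queue after the decoder so that when the
# downstream stage (e.g. PGIE) is slow, the queue discards old frames
# instead of letting the backlog (and the time lag) grow.
# "queue", "leaky", and "max-size-buffers" are standard GStreamer
# element/property names; the surrounding elements are placeholders.
def leaky_pipeline(rtsp_url):
    return (
        f"rtspsrc location={rtsp_url} latency=0 ! "
        "rtph264depay ! h264parse ! nvv4l2decoder ! "
        # depth-one leaky queue: keep only the newest decoded frame
        "queue leaky=downstream max-size-buffers=1 ! "
        "nvvideoconvert ! fakesink sync=false"
    )

desc = leaky_pipeline("rtsp://example.local/stream")
```

You could pass such a description to `Gst.parse_launch()`, or set the equivalent properties on the `queue` element in your Python app.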

Sync property is set to 0.
The documentation says the following:
" 0: As fast as possible"

There are two places to drop the additional frames.

With nvv4l2decoder (see Gst-nvvideo4linux2 in the DeepStream 5.0 documentation), the “drop-frame-interval” property will drop some frames.

With the nvinfer plugin (see Gst-nvinfer in the DeepStream 5.0 documentation), the “interval” property will skip some frames.