RTSP live source. Discard past frames from buffer and go for newest one

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the contents of the configuration files, the command line used, and any other details needed to reproduce.)
• Requirement details (This is for new requirements. Include the module name, i.e. the plugin or sample application concerned, and the function description.)

Hi all,

I'm dealing with delay problems in DeepStream when connecting to RTSP sources. Since these are live sources, it makes no sense to have a CUMULATIVE delay in the output video.

I tried using the drop-frame-interval property, but sometimes the delay is still unavoidable.
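For reference, here is a sketch of where drop-frame-interval lives in a deepstream-app style source group (the property itself is exposed by the nvv4l2decoder element; the group layout follows the reference-app config format, and the URI and values below are placeholders):

```ini
# deepstream-app style source group (sketch; values are placeholders)
[source0]
enable=1
type=4                  # 4 = RTSP source
uri=rtsp://camera-address/stream
# Decode only one out of every N incoming frames at the decoder
drop-frame-interval=5
```

In a custom Python pipeline the same property can be set directly on the decoder element instead.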

Is there any way to ask DeepStream to always go for the newest frame available, and discard the remaining frames with older timestamps?

I assumed the live-source property of the nvstreammux plugin had that purpose, but it is not working for me.
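One common GStreamer-level way to get "always the newest frame" behaviour is a leaky queue in front of nvstreammux: a plain `queue` element with `leaky=2` (leak downstream) and `max-size-buffers=1` drops stale buffers instead of accumulating them. (The element and property names are standard GStreamer; whether this placement fits your pipeline is an assumption.) The behaviour such a queue gives you can be sketched in plain Python:

```python
from collections import deque

# A leaky "keep only the newest frame" buffer, mimicking a GStreamer
# queue with leaky=2 (downstream) and max-size-buffers=1: when a new
# frame arrives and the buffer is full, the oldest frame is discarded.
class LeakyFrameBuffer:
    def __init__(self, capacity=1):
        # deque with maxlen silently discards the oldest item on overflow
        self._frames = deque(maxlen=capacity)

    def push(self, frame):
        self._frames.append(frame)

    def pop_newest(self):
        # The consumer always sees the most recent frame that survived
        return self._frames.pop() if self._frames else None

buf = LeakyFrameBuffer(capacity=1)
for frame_id in range(5):   # producer is faster than the consumer
    buf.push(frame_id)
print(buf.pop_newest())     # 4 -- only the newest frame is left
```

With capacity 1, the consumer can fall arbitrarily far behind and will still always pick up the latest frame, at the cost of dropping everything in between.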

Thanks in advance.


Could you fill in the basic information? In particular, which app are you using, and what are the related config files?


Sure, sorry.

• Hardware Platform: Jetson Xavier NX
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): L4T 32.4.4
• TensorRT Version: 7.1.3.0
• NVIDIA GPU Driver Version (valid for GPU only)

I’m using the Python DeepStream bindings to run a custom app. A summary of the pipeline is:
UridecodeBin → Streammux → NvInfer (YOLOv4 model) → MultistreamTiler → NvOsd → EGLOutput

I’m connecting to 2 RTSP live sources. I want the pipeline to always run on the newest frame. Right now it appears to buffer all frames even though it processes them more slowly than they arrive. This causes the buffer size to grow, and eventually the app freezes…

Have you checked the GPU load while running your app? You can use “tegrastats” to monitor the GPU status.

You can set the “sync” property to 0 on the EGL sink plugin for testing.
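Since you are running a custom Python app, the direct way is to set the sink element's `sync` property to 0. For reference, a sketch of the equivalent setting in a deepstream-app style sink group (layout follows the reference-app config format):

```ini
# deepstream-app style sink group (sketch)
[sink0]
enable=1
type=2       # 2 = EglSink
sync=0       # 0 = do not sync to buffer timestamps; render as fast as possible
```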

Yes, the GPU is jumping from a low value to 99% all the time.

What does the sync property do?

“sync” forces the sink to render each GstBuffer according to its timestamp. Since your pipeline is fully loaded, the timestamps will never catch up with the timeline.

The only solution is to increase the “interval” property of nvinfer until the inference module can handle the data within the frame interval; otherwise the delay will always be there. The delay is caused by the slow inference speed: either switch to a faster model, or send less data to the inference module.
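For reference, interval is set in the [property] group of the nvinfer configuration file; a minimal sketch (the other required keys, such as the model paths, are omitted here):

```ini
# Gst-nvinfer config file excerpt (sketch; model keys omitted)
[property]
# Run inference only on every (interval + 1)-th frame;
# the frames in between pass through without fresh inference
interval=4
```

With interval=4, inference runs on every 5th frame, which cuts the inference load by a factor of five.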

OK, so there is no built-in way to delete buffered frames after a period of time, e.g. when the difference from the current timestamp exceeds a certain value, right?

When the processing speed is lower than the frame rate, either skipping the processing or dropping frames can work.


I removed the inference part from my pipeline, and the system is still not able to process more than 20 frames per second (10 fps per stream, as I have 2 RTSP stream sources). So inference is not the bottleneck; it seems the Jetson device itself is. Do you have figures on DeepStream's performance on Jetson Xavier NX or Jetson Nano for decoding/encoding RTSP sources?

This is very important for a project we are currently carrying out. We expected much higher performance, as DeepStream handles hundreds of fps perfectly well over an mp4 file, for instance.

Jetson Nano is a low-end device. We have performance data for the different Jetson devices: Performance — DeepStream 6.1.1 Release documentation

Please enable the max clocks before you run your performance cases:
sudo nvpmodel -m 0
sudo jetson_clocks

Thank you, but all the tests there were done using mp4 files, which I have also tested and know work as expected on each Jetson device.

Our problem appears only when we use RTSP sources. Is such a large drop in performance normal?

As I was saying, our whole project depends on this performance, and it would be really great to have your help in solving it.