Delay in one of the multistream tiler outputs

Hardware: Jetson Xavier NX
DeepStream 5.0

Hi, I have a DeepStream pipeline running on a Jetson Xavier NX device. In summary, the pipeline is → RTSP sources (2) + UriDecodeBin + StreamMux + NVInfer + MultiStreamTiler + NVOsd + EGLOutput.

The problem is that after some time (2-3 days), I see a delay of 4-5 seconds in one of the streams, specifically in the second one.

It would be understandable to have a delay in both streams if the system were not able to process all frames at the maximum rate, but that is not the case, since one of the streams doesn’t have any delay compared to the source…

Any idea of what could be happening here? How do the DeepStream queues work when you have multiple RTSP sources?

Thank you!

For RTSP sources, the pipeline runs in live mode, so DeepStream will not check the timestamps of the input streams. The streams are handled right after DeepStream receives them. So you may need to check the source itself for the cause of the delay.

Thanks for the quick response.

I’ve already checked the sources and neither of the RTSP streams has any delay…

One thing that happened in the past was that DeepStream was not able to process 2 RTSP streams at 30 fps and it started adding delay, but the delay was added to the same degree in both streams. Then I modified the drop-frame-interval property in order to discard frames (it is now set to 5, so I’m reducing the load a lot) and it appeared to work. And now is when I see this strange behaviour…
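To illustrate what drop-frame-interval buys you, here is a back-of-the-envelope sketch in plain Python (not DeepStream API; the "keep every N-th frame" selection rule is an assumption based on the property's documented meaning):

```python
# Sketch of the decoder's drop-frame-interval property: with a value of N,
# only one frame out of every N is forwarded downstream.
# Assumption: the decoder keeps frame indices that are multiples of N.

def frames_kept(total_frames: int, drop_frame_interval: int) -> list:
    """Indices of the frames the decoder would forward downstream."""
    if drop_frame_interval <= 1:  # 0/1 effectively means "keep everything"
        return list(range(total_frames))
    return [i for i in range(total_frames) if i % drop_frame_interval == 0]

# One second of a 30 fps stream with drop-frame-interval=5:
kept = frames_kept(30, 5)
print(kept)       # [0, 5, 10, 15, 20, 25]
print(len(kept))  # 6 -> the pipeline only has to process ~6 fps per stream
```

So with a value of 5 the downstream load per stream drops from 30 fps to roughly 6 fps, which matches the "reducing the load a lot" observation above.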

It seems like the system enqueues the frames when (for whatever reason) it is not able to process them at the appropriate rate.
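That enqueuing effect can be sanity-checked with a bit of arithmetic (plain Python sketch; the 15 fps processing rate below is an assumed illustration, not a measurement from this system):

```python
# Sketch of how end-to-end delay grows when a live source outruns the pipeline
# and every undropped frame is buffered somewhere instead of being discarded.
# Assumed numbers for illustration: a 30 fps source, a pipeline consuming 15 fps.

def delay_after(seconds: float, source_fps: float, processed_fps: float) -> float:
    """Seconds of latency accumulated after `seconds` of wall-clock time."""
    backlog = max(0.0, (source_fps - processed_fps) * seconds)  # queued frames
    # The frame being displayed was captured `backlog` frames ago:
    return backlog / source_fps

# With a 2x overload, a 4-5 second delay builds up in under 10 seconds:
print(delay_after(9, 30, 15))  # 4.5
```

The point of the sketch is that even a modest sustained shortfall produces multi-second delays very quickly, so a 4-5 second lag after days of running would suggest only an occasional, brief shortfall on that one stream.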

Is it possible to flush the DeepStream pipeline queues? And what about checking the number of buffers they hold at a given moment, is that possible?

Have you set the nvstreammux property “is-live=1”? If so, DeepStream will not enqueue frames. You may also try setting the nvinfer property “interval” to do less inference.
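As a rough sketch of what the nvinfer “interval” property does (plain Python, not DeepStream API; the skip pattern assumes the documented meaning of skipping that many consecutive frames between inferences):

```python
# Sketch of the nvinfer "interval" property: with interval=N, inference runs
# on one frame and then skips the next N, so only 1 out of every N+1 frames
# actually goes through the model (skipped frames reuse the previous results).

def inferred_frames(total_frames: int, interval: int) -> list:
    """Frame indices on which inference would actually run."""
    return [i for i in range(total_frames) if i % (interval + 1) == 0]

print(inferred_frames(10, 0))  # [0, 1, ..., 9] -> every frame (the default)
print(inferred_frames(10, 1))  # [0, 2, 4, 6, 8] -> half the inference load
```

So interval=1 already halves the model's throughput requirement, which is why it is a common first knob when a pipeline cannot keep up with live sources.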

The inference is not the problem, because even when I remove the inference from the pipeline the system is still not able to process a 30 fps RTSP source.

Are you sure the is-live parameter works on RTSP streams? I mean, we’ve tested it at both 1 and 0 and the behaviour is the same: the delay starts increasing when the pipeline is not able to process such a high frame rate… Which is strange.

Yes.

Have you also enabled “sync=0” on eglglessink? What is the RTSP stream resolution?

Yes, sync on nveglglessink is set to 0. The current resolution is 1280x720.

There is no synchronization of streams in DeepStream. All the parameters for live streams are set correctly.

Would it be possible for you to share a sample pipeline where the system is able to process a 30 fps RTSP source in real time, without accumulating delay?

I’m not able to create a pipeline that processes more than 14 fps in (true) real time on a Jetson Xavier NX device.

Thank you!

Which application are you using? Have you measured your model’s performance? Can you measure the component delay as described in DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums?

I’m using a custom Python application based on apps like the deepstream_test_2.py example, for instance. The model’s performance doesn’t seem to be the problem, because when I remove the inference plugin from the pipeline the app is still not able to process more than 14-15 fps… The problem seems to be related to the RTSP processing.
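One low-tech way to confirm per-stream throughput is to count buffers per stream and report the rate. A helper like the one below (plain Python, a hypothetical debugging class, not part of the DeepStream API) could be driven from a GStreamer pad-probe callback on each source branch:

```python
import time
from collections import defaultdict

class FpsMeter:
    """Counts ticks per stream and reports the rate over the elapsed window.

    Hypothetical debugging helper: call tick(stream_id) once per buffer,
    e.g. from a pad-probe callback, then read fps() periodically to see
    which stream is falling behind.
    """

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._start = clock()
        self._counts = defaultdict(int)

    def tick(self, stream_id):
        self._counts[stream_id] += 1

    def fps(self):
        elapsed = self._clock() - self._start
        if elapsed <= 0:
            return {sid: 0.0 for sid in self._counts}
        return {sid: n / elapsed for sid, n in self._counts.items()}

# Example with a fake clock so the numbers are deterministic:
fake_now = [0.0]
meter = FpsMeter(clock=lambda: fake_now[0])
for _ in range(60):
    meter.tick(0)      # stream 0 delivered 60 buffers...
for _ in range(30):
    meter.tick(1)      # ...while stream 1 delivered only 30
fake_now[0] = 2.0      # pretend 2 seconds elapsed
print(meter.fps())     # {0: 30.0, 1: 15.0} -> stream 1 is falling behind
```

Attaching something like this at a few points in the pipeline (decoder output, mux output, tiler output) would narrow down which element the second stream's frames are stalling in.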