Can nvurisrcbin try to reconnect on EOS?

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only) 6.0
• TensorRT Version 10.4

Somehow I am getting an EOS signal from some live RTSP streams, even with the reconnect property of nvurisrcbin set. It stops my process for that camera and never reconnects. Is there an easier way to reconnect on EOS than using deepstream_python_apps/apps/runtime_source_add_delete/deepstream_rt_src_add_del.py (NVIDIA-AI-IOT/deepstream_python_apps on GitHub) as an example to implement adding and removing sources at runtime? Something like a quick fix I can implement in the bus call. I saw there is a new API in the new version of DeepStream; it might be worth trying if I have to make big modifications to the code anyway.

Edit 1: Implementing runtime source addition and deletion may not be that hard, but I am asking just in case there is some quick fix, something along the lines of the sketch below.
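
For reference, the bus-call hook I have in mind would look roughly like this sketch, based on the pattern in the runtime_source_add_delete sample (stop_release_source, add_source and uri_list are hypothetical helpers/variables following that sample, not working code):

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    def bus_call(bus, message, loop):
        t = message.type
        if t == Gst.MessageType.ELEMENT:
            struct = message.get_structure()
            # nvstreammux posts a per-source "stream-eos" element message,
            # as handled in deepstream_rt_src_add_del.py.
            if struct is not None and struct.has_name("stream-eos"):
                parsed, source_id = struct.get_uint("stream-id")
                if parsed:
                    # Do the teardown/re-add from the main loop, not the bus thread.
                    GLib.timeout_add_seconds(5, restart_source, source_id)
        elif t == Gst.MessageType.EOS:
            loop.quit()
        return True

    def restart_source(source_id):
        stop_release_source(source_id)              # hypothetical: unlink and release the old source bin
        add_source(uri_list[source_id], source_id)  # hypothetical: create a new nvurisrcbin and relink it
        return False  # run only once per timeout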

Do you mean nvurisrcbin will not reconnect after receiving EOS from rtspsrc? Did you set 'rtsp-reconnect-attempts' and 'rtsp-reconnect-interval' on nvurisrcbin? Could you share a DeepStream running log?

Yes, I get "nvstreammux: Successfully handled EOS for source_id=3", for example, from rtspsrc.

These are the properties I set

Now that you mention it, I may have misunderstood rtsp-reconnect-interval. If it is 0, does it not reconnect? Or does it immediately try to reconnect after not receiving a signal?

As shown in the doc, “rtsp-reconnect-interval” means “Timeout in seconds to wait since last data was received from an RTSP source before forcing a reconnection. 0=disable timeout”. You can set it to 10.
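
For example, something like this minimal sketch (the -1 for "rtsp-reconnect-attempts" is an assumption meaning unlimited retries; please check the property description for your DeepStream version):

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    # Create the source bin and configure RTSP reconnection.
    src_bin = Gst.ElementFactory.make("nvurisrcbin", "src-bin-0")
    src_bin.set_property("uri", "rtsp://<camera-address>")   # placeholder URI
    src_bin.set_property("rtsp-reconnect-interval", 10)      # force reconnect after 10 s without data
    src_bin.set_property("rtsp-reconnect-attempts", -1)      # assumed: -1 = retry indefinitely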


Thanks. I had to set it to 120 to be safe; it was taking a while to connect to the camera, so the forced reconnection was disrupting the connection process. I believe it works now. The power seems to have gone out at the place where I was testing, and it reconnected automatically.

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks!

No, I don't think it is an issue anymore. After I set the timeout higher, it seems to be working properly most of the time. There are still rare failures and I don't know why yet; I saw a log with a "Segmentation fault (core dumped)" message, so I will have to dig further to see why it happened.

I got some new info on why it doesn't reconnect sometimes. When it failed today, I got this error: (python3:250): GStreamer-CRITICAL **: 16:37:36.085: gst_buffer_get_size: assertion 'GST_IS_BUFFER (buffer)' failed

Do you have any ideas on how to avoid this?

That is a GStreamer log, not a DeepStream log. Could you reproduce this issue using only a DeepStream sample with the higher timeout set? If so, could you share a complete log? Thanks!

It seems to happen randomly, so the issue is hard to reproduce. The logs I got so far were:

  1. Killed

  2. Segmentation Fault (core dumped)

  3. (python3:250): GStreamer-CRITICAL **: 16:37:36.085: gst_buffer_get_size: assertion 'GST_IS_BUFFER (buffer)' failed

The Jetsons run an inference pipeline 24/7, so it may have something to do with that. I was wondering whether a routine to clean the CUDA cache or something similar might help.

There is not enough information to fix this issue. What is the device model? How many sources are you testing? What are the resolution, fps, and codec?
Can the issue be reproduced without any code modification in runtime_source_add_delete? If you use only nvurisrcbin in runtime_source_add_delete, without adding other code, can this issue be reproduced? If so, could you share the complete logs? Thanks!
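
To capture a complete log, you can redirect the GStreamer debug output to a file before Gst.init(), for example (a sketch; the debug categories and levels here are just an assumption, adjust them as needed):

    import os

    # Set these before Gst.init() so the debug system picks them up.
    os.environ["GST_DEBUG"] = "3,rtspsrc:5"                 # global level 3, rtspsrc at level 5
    os.environ["GST_DEBUG_FILE"] = "/tmp/ds_pipeline.log"   # write the log to a file
    os.environ["GST_DEBUG_NO_COLOR"] = "1"                  # keep the file free of color escape codes

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst
    Gst.init(None)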