The issue is that if a stream fails for some time and then comes back later, say after 2 minutes, deepstream-app is unable to reconnect to the stream. Another issue is that when all the streams fail, we get ERROR messages on the bus for a couple of seconds and deepstream-app tries to reconnect, but after a few seconds an EOS message is posted on the bus and the app quits by itself. I have tried increasing this waiting period, and I have also tried restarting the pipeline a few seconds after the app reaches the ‘done’ state, but without success.
We follow a different approach: instead of deepstream-app, we use a GStreamer pipeline built from DeepStream elements and launched with GstD. You can check the following wiki page for some DeepStream pipeline examples with gst-launch and GstD:
Using GstInterpipes, GstD and a python client we can recover from errors on the RTSP streams without stopping the inference pipeline. The diagram of the solution looks something like this:
When an RTSP stream stops receiving buffers or fails, we replace it with a dummy stream that sends videotestsrc buffers, so that the DeepStream pipeline doesn’t fail.
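The failover decision itself is easy to sketch independently of GStreamer. The class and names below (`StreamWatchdog`, `FAILOVER_SOURCE`, `camera0_sink`) are illustrative only, not part of GstD or GstInterpipes; in the real pipeline the switch would be done by repointing the `listen-to` property of the inference branch's `interpipesrc` (e.g. via gstd-client), and `on_buffer()` would be driven by a pad probe or appsink callback:

```python
import time

# Hypothetical failover logic: each camera feed has a watchdog. When no
# buffer has arrived within `timeout` seconds, the inference input should
# listen to a dummy (videotestsrc-backed) interpipe instead, and switch
# back once the camera delivers buffers again.

FAILOVER_SOURCE = "dummy_sink"  # assumed name of the videotestsrc interpipesink

class StreamWatchdog:
    def __init__(self, name, timeout=2.0):
        self.name = name            # interpipesink name of the real camera
        self.timeout = timeout
        self.last_buffer = time.monotonic()

    def on_buffer(self):
        """Call this on every buffer (pad probe / appsink callback)."""
        self.last_buffer = time.monotonic()

    def active_source(self, now=None):
        """Name of the interpipe the inference branch should listen to."""
        now = time.monotonic() if now is None else now
        if now - self.last_buffer > self.timeout:
            return FAILOVER_SOURCE  # camera stalled: serve the dummy stream
        return self.name

wd = StreamWatchdog("camera0_sink", timeout=2.0)
wd.on_buffer()
print(wd.active_source())                          # camera0_sink (healthy)
print(wd.active_source(now=time.monotonic() + 5))  # dummy_sink (stalled)
```

A periodic timer would poll `active_source()` and issue the `listen-to` change only when the returned name differs from the current one, so the DeepStream pipeline keeps receiving buffers throughout.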
In the bus callback, you can get the “src” property of the message to find the source of the message, and if it’s your RTSP source you can handle the case differently rather than quitting. I think you’ll then have to set the pipeline to a paused state, wait for that to complete, unlink the source, remove it from the pipeline, swap in another, link it, and restart the pipeline. That’s my plan anyway, since I’m working on the same problem. My RTSP sources are flaky AF, which is actually great to test with. There may be a way to do it without pausing the pipeline, but I can’t find one. There isn’t much information on this out there, and those who have solved this issue aren’t likely to share it. Any advice from GStreamer experts would be welcome.
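The dispatch logic described above can be sketched without the GStreamer API at all. This is only an illustration of the decision, not real binding code: in an actual bus callback the source name would come from `message.src.get_name()`, and “reconnect” would trigger the pause/unlink/replace/relink cycle described. The element names in `RTSP_SOURCES` are assumptions:

```python
# Hypothetical bus-callback dispatch: decide per source whether to
# reconnect or shut down, instead of quitting on any ERROR/EOS.

RTSP_SOURCES = {"src_elem0", "src_elem1"}  # names of the flaky RTSP bins

def handle_bus_message(msg_type, src_name):
    """Return the action the app should take for one bus message.

    msg_type: "error" or "eos"; src_name: name of the emitting element.
    """
    if msg_type in ("error", "eos") and src_name in RTSP_SOURCES:
        # Don't tear down the whole pipeline for one flaky camera:
        # schedule pause -> unlink -> replace -> relink -> play instead.
        return "reconnect " + src_name
    if msg_type in ("error", "eos"):
        return "quit"    # failure outside the RTSP sources: give up
    return "ignore"

print(handle_bus_message("error", "src_elem0"))  # reconnect src_elem0
print(handle_bus_message("eos", "sink"))         # quit
```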
@miguel.taylor, for the time being I would like to stick to the default deepstream pipeline.
@mdegans, I was trying an approach similar to what you mentioned, but I have been unsuccessful so far. Kindly let me know if you were able to make any progress. Any help is greatly appreciated.
@DaneLLL, any update from your side?
I have also tried flushing the pipeline and then restarting it from scratch whenever it reaches EOS. I am no GStreamer expert and have very little hands-on experience with it. I am clueless as to why I am unable to restart the pipeline and make it work properly.
I will let you know if I figure out a solution, neophyte1. GStreamer has a steep learning curve, but the tutorials help. Unfortunately there’s a lot you just have to figure out as well. If you haven’t already found the debug stuff, this page is very useful. You can also hook up your option parser to GStreamer’s so you can use --gst-debug-level at the command prompt. Frankly, the .dot file has been a lifesaver for me. The ability to get a visual overview of your pipeline so you can see “aha, that’s not connected” is priceless.
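For reference, the .dot dump mentioned above is controlled by the standard `GST_DEBUG_DUMP_DOT_DIR` environment variable, which must be set before GStreamer initializes. A minimal sketch (the directory path is arbitrary):

```python
import os

# GStreamer writes .dot pipeline graphs into GST_DEBUG_DUMP_DOT_DIR;
# it must be set before Gst.init() is called or it is ignored.
DOT_DIR = "/tmp/pipeline-dots"   # any writable directory works
os.environ["GST_DEBUG_DUMP_DOT_DIR"] = DOT_DIR
os.makedirs(DOT_DIR, exist_ok=True)

# Later, after building the pipeline, dump a graph with:
#   Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "pipeline")
# and render it with graphviz:
#   dot -Tpng /tmp/pipeline-dots/pipeline.dot -o pipeline.png
```

deepstream-app also dumps a graph on its own when this variable is set, so you don’t even need to touch the code to inspect the reference pipeline.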
Attached is a patch that demonstrates reconnection in deepstream-app.
To enable reconnection, set the source type to 4 and set rtsp-reconnect-interval-sec to the desired value. If no data is received within this interval, the app will force a reconnection to the RTSP source.
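Concretely, with the patch applied the source group in the deepstream-app config file would look something like this (the URI and the 10-second interval are placeholders; check the patch for the exact set of supported keys):

```ini
[source0]
enable=1
# type=4 selects the RTSP source bin that supports reconnection
type=4
uri=rtsp://<camera-ip>:<port>/stream
# force a reconnect if no data is received for this many seconds
rtsp-reconnect-interval-sec=10
```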
reconnection.zip (4.47 KB)
I have the same problem. After 3-4 hours, the deepstream app with an RTSP source crashes with this error:
WARNING from src_elem0: Could not read from resource.
Debug info: gstrtspsrc.c(5293): gst_rtspsrc_loop_udp (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstRTSPSrc:src_elem0: Unhandled return value -7.
ERROR from src_elem0: Could not read from resource.
Debug info: gstrtspsrc.c(5361): gst_rtspsrc_loop_udp (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstRTSPSrc:src_elem0: Could not receive message. (System error)
ERROR from src_elem0: Internal data stream error.
Debug info: gstrtspsrc.c(5653): gst_rtspsrc_loop (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstRTSPSrc:src_elem0: streaming stopped, reason error (-5)
** INFO: <bus_callback:212>: Received EOS. Exiting ...
Reset source pipeline reset_source_pipeline 0x7f5d062080 ,Quitting
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
ERROR from src_elem0: Could not write to resource.
Debug info: gstrtspsrc.c(5997): gst_rtspsrc_try_send (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstRTSPSrc:src_elem0: Could not send message. (System error)
ERROR from src_elem0: Could not write to resource.
Debug info: gstrtspsrc.c(8244): gst_rtspsrc_pause (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstRTSPSrc:src_elem0: Could not send message. (System error)
App run successful
How can I solve it?
I applied the reconnection patch and ran it with 2 streams. After some time, I disconnected the network for a couple of minutes and then reconnected it. The pipeline resumed, but the output from one of the streams became quite jittery and I began seeing this error on the terminal:
NvMapMemCacheMaint:1075334668 failed 
How to resolve this?
Which deepstream SDK version is this patch applicable for?
It is for DS4.0.2.
Any updates regarding the above query? https://devtalk.nvidia.com/default/topic/1070700/deepstream-sdk/deepstream-crashes-when-rtsp-fails-/post/5432708/#5432708
Is anyone able to explain what this means, which component issues it, and how to avoid it:
NvMapMemCacheMaint:1075334668 failed 
We have a team trying to reproduce the failure. We will update once there is progress.
The patch was verified before being posted publicly. Probably there are corner cases not considered.
Any updates regarding the patch ?
Our teams don’t observe the issue on DS4.0.2 + reconnection.patch.
A user also reports it in
We are checking with the user to have a test app for reproducing the failure.
To reproduce the issue, I would suggest running the deepstream reference app with the patch and 2-4 RTSP streams, and then disconnecting the streams (by switching off the network or some other means). Don’t reconnect all the streams together at one time, but randomly. Kindly let me know if the issue can be resolved.
Hey @mdegans, have you been able to find a workaround for it, by any chance?