I can’t comment much on the nvidia solution above, because on one hand my C skills are not that good and on the other, the app is not specifically designed to handle rtsp (network) errors but merely to add / remove error-free sources.
I somehow managed to handle reconnect in python using some hacks:
In the bus callback, check message.src and, if the message comes from rtspsrc, don’t quit the loop.
Upon detecting an RTSP error, set the uridecodebin bin to the NULL state, then call sync_state_with_parent() on that bin. This discards the previous state and restarts the connection process from scratch. Don’t forget to register your callbacks again for linking to nvstreammux. I’m not sure whether it will work with a manually constructed decoding chain (instead of uridecodebin), but I’m fairly sure all elements in the chain (including the NVIDIA decoder) must be set to NULL.
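The two steps above can be sketched in Python roughly like this. This is only a sketch under assumptions: PyGObject/GStreamer are installed, the sources are uridecodebin bins, and the helper names and 5-second retry delay are my own, not part of any NVIDIA API. The gi import is guarded so the pure helper can be exercised without GStreamer installed.

```python
try:
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib
except (ImportError, ValueError):
    Gst = GLib = None  # PyGObject is required at runtime

def is_rtsp_error(chain):
    """chain: factory names of message.src and its ancestors,
    innermost first, e.g. ['rtspsrc', 'uridecodebin', 'pipeline']."""
    return "rtspsrc" in chain

def factory_chain(obj):
    """Walk message.src up through its parents, collecting factory names."""
    names = []
    while obj is not None:
        get_factory = getattr(obj, "get_factory", None)
        factory = get_factory() if get_factory else None
        if factory is not None:
            names.append(factory.get_name())
        obj = obj.get_parent()
    return names

def find_uridecodebin(obj):
    """Find the uridecodebin ancestor of the element that errored."""
    while obj is not None:
        get_factory = getattr(obj, "get_factory", None)
        factory = get_factory() if get_factory else None
        if factory is not None and factory.get_name() == "uridecodebin":
            return obj
        obj = obj.get_parent()
    return None

def bus_call(bus, message, loop):
    if message.type == Gst.MessageType.ERROR:
        if is_rtsp_error(factory_chain(message.src)):
            # rtspsrc failed: don't quit the loop, reset the source instead
            GLib.timeout_add_seconds(
                5, restart_bin, find_uridecodebin(message.src))
            return True
        loop.quit()
    return True

def restart_bin(uridecodebin):
    if uridecodebin is not None:
        uridecodebin.set_state(Gst.State.NULL)   # discard previous state
        uridecodebin.sync_state_with_parent()    # reconnect from scratch
        # NB: re-register the pad-added callback that links to nvstreammux
    return False  # one-shot GLib timeout
```

Walking the parents is needed because the error message usually originates from the rtspsrc buried inside the uridecodebin, not from the bin itself.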
Hello @dorin.clisu.ntt, I am facing a similar error and I am using Python. I want to understand how to implement the mentioned solution. I am a newbie in DeepStream, so can you guide me or provide an example of the solution mentioned above?
Thanks
I applied this patch to deepstream_test5, version 4.0.2,
but I am facing a problem: whenever a source goes down, the subsequent sources in the pipeline also stop collecting frames in nvstreammux,
which prevents collecting meta in the analytics module.
Please provide a solution, as I am in the middle of completing my project.
$ gst-inspect-1.0 nvstreammux
...
batched-push-timeout: Timeout in microseconds to wait after the first buffer is available
to push the batch even if the complete batch is not formed.
Set to -1 to wait infinitely
flags: readable, writable
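The property above is relevant to the stall described earlier: if batched-push-timeout is -1, nvstreammux waits forever for a complete batch, so one dead source can hold up every stream. A finite timeout of roughly one frame interval keeps partial batches flowing. A small sketch for picking a value (the helper name is mine; the commented set_property call is the standard PyGObject way to apply it):

```python
def batched_push_timeout_us(fps, frame_intervals=1.0):
    """Microseconds nvstreammux should wait before pushing an
    incomplete batch: about one frame interval of the slowest source."""
    return int(frame_intervals * 1_000_000 / fps)

timeout = batched_push_timeout_us(25)  # 40000 us for 25 fps sources
# streammux.set_property("batched-push-timeout", timeout)  # PyGObject
```

With this set, batches containing only the live sources keep reaching downstream elements even while one camera is offline.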
Hi,
This is for DS4.0.2 and we have some modification for this in DeepStream SDK 5.0 GA and 5.0.1. Please try the latest release. If you still observe it on 5.0.1, please start a new post. Thanks.
The interpipe solution works really well, but why does the DeepStream pipeline need to be kept alive with a dummy video src?
I have seen that if I start my program without network and add the network later, then when the RTSP cameras connect, the DeepStream pipeline seems to stay in the READY state and never goes to PLAYING. However, if I have network at program start, it all works normally; even disconnecting and reconnecting cameras after it is running works, including disconnecting the network for a couple of minutes.
Does the DeepStream pipeline fail if there are no streams to feed it?
The problem seems to be related to nvstreammux. We didn’t debug this enough to find the root cause, mainly because it is difficult to debug the problem without access to the sources. We are using the solution with dummy sources as a workaround.
Same here. Having access to nvstreammux and nvdsosd would have made our development for DeepStream projects easier. We needed to implement several DS meta functionalities in an external plugin that could’ve been implemented in nvdsosd.
But on the other hand, I really appreciate that we have the source for nvinfer. Thanks to that we were able to add support for encrypted models.
What I’ve tried:
Open one more thread to detect the network; on network disconnection set the state to NULL, and on network connection set it to PLAYING, but uridecodebin will create many components…
I did the following test:
1. Disconnect the network
2. Set the pipeline state to NULL
3. Continuously monitor the network and wait for it to come back
4. Set the pipeline state to PLAYING
5. The program restarted successfully

This works, but there are some problems. If, at the fourth step, the network cable is pulled out while the pipeline is being restarted and the restart fails (on the display interface), some components of the pipeline end up in an abnormal state and never reach NULL. After waiting a while, plugging the network cable back in and setting the pipeline to PLAYING interrupts the program (segmentation fault).
I’d appreciate it if someone could provide a solution for RTSP disconnection.
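The monitor-and-restart loop from the test steps above can be factored so the restart decision is separate from the GStreamer calls. This is a sketch under assumptions: `network_up` and the action names are my own, and the actual state changes need PyGObject. The gi import is local so the decision logic can be tested without GStreamer.

```python
import socket
import time

def network_up(host="8.8.8.8", port=53, timeout=2.0):
    """Crude reachability check; replace with whatever fits your network."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def transition(prev_up, now_up):
    """Decide what to do with the pipeline on a network state change."""
    if prev_up and not now_up:
        return "stop"    # set pipeline to NULL *before* elements error out
    if not prev_up and now_up:
        return "start"   # network is back: set pipeline to PLAYING again
    return None

def watchdog(pipeline, poll_seconds=5):
    from gi.repository import Gst  # PyGObject, imported lazily
    prev = network_up()
    while True:
        now = network_up()
        action = transition(prev, now)
        if action == "stop":
            pipeline.set_state(Gst.State.NULL)
        elif action == "start":
            pipeline.set_state(Gst.State.PLAYING)
        prev = now
        time.sleep(poll_seconds)
```

If this runs in its own thread, the state changes should be serialized against the bus handler (e.g. dispatched via GLib.idle_add), otherwise the watchdog can race a half-restarted pipeline, which is exactly the kind of situation where the segmentation fault described above appears.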
Essentially it allows you to decouple your pipelines, so you can have a camera-capture pipeline and a DeepStream pipeline.
Then you can tell DeepStream to listen to your camera pipeline. The cool thing about interpipes is that you can prevent them from forwarding EOS messages, so the DeepStream pipeline will not receive the camera’s EOS.
It also allows you to restart your camera pipeline if/when it gets an EOS.
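For reference, the decoupled setup described above can be sketched as two pipeline descriptions. This assumes RidgeRun’s GstInterpipe plugin; the node name `cam1`, the `forward-eos`/`listen-to` properties, and the camera URL are illustrative of that plugin, not something from the posts above:

```
# Capture pipeline: restartable on its own, does not forward EOS downstream
rtspsrc location=rtsp://camera.example/stream ! rtph264depay ! h264parse
  ! interpipesink name=cam1 forward-eos=false sync=false

# DeepStream pipeline: listens to the capture pipeline by node name
interpipesrc listen-to=cam1 is-live=true ! h264parse ! nvv4l2decoder
  ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! fakesink
```

When the camera pipeline hits EOS or an error, only that pipeline is set to NULL and restarted; the DeepStream pipeline keeps running because forward-eos=false prevents the EOS from propagating across the interpipe boundary.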