Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) - GPU
• DeepStream Version - 6.3
• TensorRT Version - not with the machine currently to check
• NVIDIA GPU Driver Version (valid for GPU only) - not with the machine currently to check, fairly recent
• Issue Type (questions, new requirements, bugs) - questions
• How to reproduce the issue? (This is for bugs. Including which sample app is used, the configuration file contents, the command line used and other details for reproducing)
• Requirement details (This is for new requirements. Including the module name - for which plugin or for which sample application, the function description)
Hi all,
I have an issue that I’m hoping has a simple solution. I don’t think it has been addressed elsewhere on the forum. I have a DeepStream pipeline that ingests RTSP streams, does the usual mux, inference, track, and demux, and then outputs RTMP streams. I am using nvurisrcbin components on the input and setting the rtsp-reconnect-interval property to facilitate reconnection if an input stream drops. This seems to work as intended on the input side, but the pipeline still fails on the output side. How should I properly configure the post-demux components so that the pipeline stays resilient even when an RTSP input stream drops? Note that I am using the Python bindings for DeepStream.
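For reference, this is roughly how I create each source bin with the Python bindings (a minimal sketch only; the function and callback names are illustrative and error checking is omitted):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

def on_pad_added(src_bin, pad, user_data):
    # Link the dynamically created video pad to the muxer's request pad.
    streammux, index = user_data
    caps = pad.get_current_caps() or pad.query_caps(None)
    if caps.get_structure(0).get_name().startswith("video"):
        sinkpad = streammux.get_request_pad(f"sink_{index}")
        pad.link(sinkpad)

def make_source_bin(index, uri, streammux):
    # nvurisrcbin handles the RTSP reconnection internally.
    src_bin = Gst.ElementFactory.make("nvurisrcbin", f"source-bin-{index}")
    src_bin.set_property("uri", uri)
    # Retry the RTSP connection every 10 s if the input stream drops.
    src_bin.set_property("rtsp-reconnect-interval", 10)
    src_bin.connect("pad-added", on_pad_added, (streammux, index))
    return src_bin
```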
For completeness, here is roughly my pipeline:
nvurisrcbin -> muxer -> pgie -> tracker -> analytics -> nvstreamdemux -> nvvideoconvert -> nvdsosd -> nvvideoconvert -> h264parse -> flvmux -> rtmpsink
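And this is roughly how each post-demux output branch is built and linked (again just a sketch with illustrative names; I have put an nvv4l2h264enc ahead of h264parse here, since flvmux/rtmpsink need encoded H.264, and I have left out error checking):

```python
def link_output_branch(pipeline, demux, index, rtmp_url):
    # Request the per-stream pad from nvstreamdemux for this source index.
    demux_src = demux.get_request_pad(f"src_{index}")

    conv1 = Gst.ElementFactory.make("nvvideoconvert", f"conv1-{index}")
    osd = Gst.ElementFactory.make("nvdsosd", f"osd-{index}")
    conv2 = Gst.ElementFactory.make("nvvideoconvert", f"conv2-{index}")
    enc = Gst.ElementFactory.make("nvv4l2h264enc", f"enc-{index}")
    parse = Gst.ElementFactory.make("h264parse", f"parse-{index}")
    flvmux = Gst.ElementFactory.make("flvmux", f"flvmux-{index}")
    sink = Gst.ElementFactory.make("rtmpsink", f"sink-{index}")
    flvmux.set_property("streamable", True)
    sink.set_property("location", rtmp_url)

    for elem in (conv1, osd, conv2, enc, parse, flvmux, sink):
        pipeline.add(elem)

    # Link the demuxed stream through the branch down to the RTMP sink.
    demux_src.link(conv1.get_static_pad("sink"))
    conv1.link(osd)
    osd.link(conv2)
    conv2.link(enc)
    enc.link(parse)
    parse.link(flvmux)
    flvmux.link(sink)
```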
Any help is much appreciated!