Setting up rtsp-reconnect-attempts and rtsp-reconnect-interval-sec in deepstream-imagedata-multistream

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** GPU RTX 4080
**• DeepStream Version** 7.1
**• JetPack Version (valid for Jetson only)** N/A
**• TensorRT Version** 8.9
I am using deepstream-imagedata-multistream to interface with 25 CCTVs.
I would like to set rtsp-reconnect-attempts and rtsp-reconnect-interval-sec for each RTSP source in the program. May I have sample code for that?

I tested the Python app by unplugging a CCTV and plugging it back in; the application does not reconnect to the CCTV. In the C++ DeepStream apps this function can be configured. How can I configure reconnection, with reconnection attempts and an interval?

You can use nvurisrcbin to do the reconnection. Please refer to this topic.

RTSP streaming from the CCTV works only with uridecodebin. When I changed to

    uri_decode_bin = Gst.ElementFactory.make("nvurisrcbin", "uri-decode-bin")
    if not uri_decode_bin:
        sys.stderr.write(" Unable to create uri decode bin \n")
    # Retry every 5 seconds; -1 means unlimited reconnect attempts
    uri_decode_bin.set_property("rtsp-reconnect-interval", 5)
    uri_decode_bin.set_property("rtsp-reconnect-attempts", -1)

Streaming from the CCTV fails with this error:

Decodebin child added: nvv4l2decoder0 

Error: gst-stream-error-quark: Internal data stream error. (1): ../libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstDsNvUriSrcBin:uri-decode-bin/GstRTSPSrc:src/GstUDPSrc:udpsrc3:
streaming stopped, reason not-linked (-1)
Camera id 0 is terminated
Exiting app

uridecodebin has no property “rtsp-reconnect-interval”.

Please refer to this deepstream-test3 code, which supports the nvurisrcbin plugin.

Yes, that is what I meant.
My code is as follows.

    if file_loop:
        # use nvurisrcbin to enable file-loop
        uri_decode_bin = Gst.ElementFactory.make("nvurisrcbin", "uri-decode-bin")
        uri_decode_bin.set_property("file-loop", 1)
        uri_decode_bin.set_property("cudadec-memtype", 0)
    else:
        uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
        uri_decode_bin.set_property("rtsp-reconnect-interval", 5)
        uri_decode_bin.set_property("rtsp-reconnect-attempts", -1)
    if not uri_decode_bin:
        sys.stderr.write(" Unable to create uri decode bin \n")

My app reads from CCTVs, not from a file. rtsp-reconnect-interval and rtsp-reconnect-attempts can be set only on nvurisrcbin; uridecodebin gives this error for rtsp-reconnect-interval: TypeError: object of type `GstURIDecodeBin' does not have property `rtsp-reconnect-interval'

nvurisrcbin gives the following error when streaming from RTSP:
Error: gst-stream-error-quark: Internal data stream error. (1): ../libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstDsNvUriSrcBin:uri-decode-bin/GstRTSPSrc:src/GstUDPSrc:udpsrc1:

So rtsp-reconnect-interval and rtsp-reconnect-attempts are only for streaming from files, not for streaming from CCTVs?

Only nvurisrcbin supports the rtsp-reconnect-interval and rtsp-reconnect-attempts properties. Please set the two properties after creating the nvurisrcbin plugin, as sketched below.
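For reference, here is a minimal sketch of how the test3 snippet above could be restructured (my own reading of this advice, not code from the thread): create nvurisrcbin in both the file-loop and live-RTSP branches, and set the reconnect properties unconditionally.

    import sys
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    def make_uri_decode_bin(file_loop):
        # Call Gst.init() once in the app before using this helper.
        # nvurisrcbin replaces uridecodebin in both branches, since only
        # nvurisrcbin exposes the RTSP reconnect properties.
        uri_decode_bin = Gst.ElementFactory.make("nvurisrcbin", "uri-decode-bin")
        if not uri_decode_bin:
            sys.stderr.write(" Unable to create uri decode bin \n")
            return None
        if file_loop:
            # Only meaningful for file URIs; looping does not apply to live RTSP.
            uri_decode_bin.set_property("file-loop", 1)
            uri_decode_bin.set_property("cudadec-memtype", 0)
        # Retry every 5 seconds; -1 means unlimited reconnect attempts.
        uri_decode_bin.set_property("rtsp-reconnect-interval", 5)
        uri_decode_bin.set_property("rtsp-reconnect-attempts", -1)
        return uri_decode_bin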

Yes, nvurisrcbin supports rtsp-reconnect-interval and rtsp-reconnect-attempts, but I have errors in streaming. Please read my earlier comment.

If you use test3 without any modifications, does it run well with --file-loop? If so, then you can set the two properties after creating the nvurisrcbin plugin.
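For reference, assuming the stock deepstream_python_apps test3 command line, that check would look something like this (the sample file is the one shipped inside the DeepStream container):

python3 deepstream_test_3.py -i file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 --file-loop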

Same.

0:00:27.098956419    85     0x62b71530 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest3_pgie_config.txt sucessfully
Decodebin child added: source 


**PERF:  {'stream0': 0.0}

Nothing comes out, and it stops there.
I am using Docker. Do I need to install a library in the Docker container?
My Docker commands are:

xhost +
docker run --gpus all -it --rm --entrypoint "" -v $PWD:/workspace --net=host --privileged -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -p 8888:8888 nvcr.io/nvidia/deepstream:7.0-triton-multiarch /bin/bash

I am not able to reproduce this issue. Here is the log: log-0709.txt (6.7 KB). Please refer to this link for how to start the Docker container, then use the command below to check the RTSP stream inside the container:

gst-launch-1.0 nvurisrcbin uri=rtsp://xxx ! fakesink

My Docker command shouldn't be the issue. Have you tried this in the nvcr.io/nvidia/deepstream:7.0-triton-multiarch container?

Can the command in my last comment run well in your container? If not, could you share the complete logs? If it can, please simplify test3 to narrow down the issue; for example, check whether "nvurisrcbin -> nvstreammux -> nvinfer -> … -> nv3dsink" is fine (see the sketch below).
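A minimal sketch of that check (my own construction, not from the thread): it assumes the dstest3_pgie_config.txt in the test3 directory, uses a placeholder rtsp://xxx URI, and substitutes fakesink for nv3dsink so the check does not depend on a display (nv3dsink is a Jetson-side sink).

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)
    # nvurisrcbin -> nvstreammux -> nvinfer -> fakesink
    pipeline = Gst.parse_launch(
        "nvurisrcbin uri=rtsp://xxx ! m.sink_0 "
        "nvstreammux name=m batch-size=1 width=1920 height=1080 ! "
        "nvinfer config-file-path=dstest3_pgie_config.txt ! fakesink"
    )
    pipeline.set_state(Gst.State.PLAYING)

    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()

    def on_error(bus, msg):
        # Print the first error and stop, so failures are easy to spot.
        print(msg.parse_error())
        loop.quit()

    bus.connect("message::error", on_error)
    try:
        loop.run()
    finally:
        pipeline.set_state(Gst.State.NULL)

If this runs, add the remaining elements back one at a time to find the one that breaks the pipeline.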

This is my complete log.
log.txt (52.5 KB)
This is the test3 code.

I meant the gst-launch command above, not the test3 log. Here is my test log: gst-launch.txt (1.2 KB). First, please make sure that RTSP receiving with nvurisrcbin is fine.

I have this error.

(deepstream) root@user-Nuvo-10000-Series:/workspace/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test3# gst-launch-1.0 nvurisrcbin  uri=rtsp://admin:nextan6423@172.16.158.244:554/cam/realmonitor?channel=1&subtype=0           
[2] 546
(deepstream) root@user-Nuvo-10000-Series:/workspace/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/deepstream-test3# Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Pipeline is PREROLLED ...
Prerolled, waiting for progress to finish...
Progress: (connect) Connecting to rtsp://admin:nextan6423@172.16.158.244:554/cam/realmonitor?channel=1
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
Progress: (request) Sending PLAY request
Redistribute latency...
Progress: (request) Sending PLAY request
Redistribute latency...
Progress: (request) Sent PLAY request
Redistribute latency...
Redistribute latency...
ERROR: from element /GstPipeline:pipeline0/GstDsNvUriSrcBin:dsnvurisrcbin0/GstRTSPSrc:src/GstUDPSrc:udpsrc1: Internal data stream error.
Additional debug info:
../libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:pipeline0/GstDsNvUriSrcBin:dsnvurisrcbin0/GstRTSPSrc:src/GstUDPSrc:udpsrc1:
streaming stopped, reason not-linked (-1)
Execution ended after 0:00:00.333591920
Setting pipeline to NULL ...
Freeing pipeline ...

Please check whether it is because the URI contains special characters. Please refer to this topic.
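In this case the likely culprit is the unquoted &: the shell treats it as a background operator (note the [2] 546 job line in the output above), so the URI is truncated at channel=1, which is exactly what the Connecting progress line in the log shows. Quoting the URI avoids this (password elided here):

gst-launch-1.0 nvurisrcbin uri="rtsp://admin:...@172.16.158.244:554/cam/realmonitor?channel=1&subtype=0" ! fakesink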

Yes, you are right. I changed the CCTV and it is OK now.

My new CCTV's URI has no special characters, so no Internal data stream error occurred. Now it keeps reconnecting, but a new error is observed.

File "/workspace/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/deepstream_imagedata-multistream.py", line 500, in cb_newpad
Decodebin child added: parser
    gststruct = caps.get_structure(0)
AttributeError: 'NoneType' object has no attribute 'get_structure'
Decodebin child added: src_cap_filter_nvvidconv

The error is coming from the following function.

def cb_newpad(decodebin, decoder_src_pad, data):
    caps = decoder_src_pad.get_current_caps()
    gststruct = caps.get_structure(0)
    gstname = gststruct.get_name()
    source_bin = data
    features = caps.get_features(0)

    # Need to check if the pad created by the decodebin is for video and not
    # audio.
    if (gstname.find("video") != -1):
        # Link the decodebin pad only if decodebin has picked nvidia
        # decoder plugin nvdec_*. We do this by checking if the pad caps contain
        # NVMM memory features.
        if features.contains("memory:NVMM"):
            # Get the source bin ghost pad
            bin_ghost_pad = source_bin.get_static_pad("src")
            if not bin_ghost_pad.set_target(decoder_src_pad):
                sys.stderr.write("Failed to link decoder src pad to source bin ghost pad\n")
        else:
            sys.stderr.write(" Error: Decodebin did not pick nvidia decoder plugin.\n")
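For what it's worth, a common guard for this crash (an assumption on my part, not something confirmed in this thread): while a source is reconnecting, the pad may not have current caps yet, so get_current_caps() returns None; falling back to query_caps() avoids the None dereference.

caps = decoder_src_pad.get_current_caps()
if not caps:
    # While the source is (re)connecting the pad may carry no current caps;
    # query_caps() returns the pad's possible caps instead of None.
    caps = decoder_src_pad.query_caps()
gststruct = caps.get_structure(0)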

It seems the original issue was fixed. Could you open a new topic for the new issue? Let's focus on one issue per topic. Thanks!