DeepStream stops working when one RTSP source is invalid

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) AGX
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only) 4.3
• TensorRT Version 7.1
• NVIDIA GPU Driver Version (valid for GPU only) 10.2
• Issue Type( questions, new requirements, bugs)

When I add two RTSP streams to deepstream-app, everything works while both sources are valid. But when one source is valid and the other is invalid, DeepStream no longer works on the valid stream either: there is no stream output.

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I use an RTMP sink as the stream output. The log below shows the FPS dropping to zero on both streams:
Fri Aug 27 18:00:43 2021
**PERF: 0.00 (1.07) 0.00 (0.00)
Fri Aug 27 18:00:48 2021
**PERF: 0.00 (0.72) 0.00 (0.00)
Fri Aug 27 18:00:53 2021
**PERF: 0.00 (0.54) 0.00 (0.00)
Fri Aug 27 18:00:58 2021
**PERF: 0.00 (0.43) 0.00 (0.00)
Fri Aug 27 18:01:03 2021
**PERF: 0.00 (0.36) 0.00 (0.00)
Fri Aug 27 18:01:08 2021
**PERF: 0.00 (0.31) 0.00 (0.00)
Fri Aug 27 18:01:13 2021
**PERF: 0.00 (0.27) 0.00 (0.00)
Fri Aug 27 18:01:18 2021
**PERF: 0.00 (0.24) 0.00 (0.00)
Fri Aug 27 18:01:23 2021
**PERF: 0.00 (0.22) 0.00 (0.00)
Fri Aug 27 18:01:28 2021
**PERF: 0.00 (0.20) 0.00 (0.00)
Fri Aug 27 18:01:33 2021
**PERF: 0.00 (0.18) 0.00 (0.00)
** WARN: <watch_source_status:614>: No data from source 0 since last 60 sec. Trying reconnection
** INFO: <reset_source_pipeline:1155>: Resetting source 0
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Opening in BLOCKING MODE
Opening in BLOCKING MODE
** INFO: <reset_source_pipeline:1155>: Resetting source 1
Fri Aug 27 18:01:38 2021
**PERF: 0.03 (0.20) 0.00 (0.00)
ERROR from src_elem1: Could not open resource for reading and writing.
Debug info: gstrtspsrc.c(7469): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin1/GstRTSPSrc:src_elem1:
Failed to connect. (Generic error)

################################################################################
# Copyright (c) 2019-2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the “Software”),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=0
columns=0
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://admin:liantong123@192.168.0.80/Streaming/Channels/101
num-sources=1
gpu-id=0
nvbuf-memory-type=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://admin:liantong123@192.168.0.88/Streaming/Channels/101
num-sources=1
gpu-id=0
nvbuf-memory-type=0

[source2]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://admin:liantong123@192.168.0.80/Streaming/Channels/101
num-sources=1
gpu-id=0
nvbuf-memory-type=0

[source3]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://admin:liantong123@192.168.0.80/Streaming/Channels/101
num-sources=1
gpu-id=0
nvbuf-memory-type=0

[sink3]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 7=RTMP
type=7
#1=h264 2=h265
codec=1
sync=1
#iframeinterval=10
bitrate=4000000
rtmp-location=rtmp://127.0.0.1:1935/live/capture0 live=1

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 7=RTMP
type=7
#1=h264 2=h265
codec=1
sync=1
#iframeinterval=10
bitrate=4000000
rtmp-location=rtmp://127.0.0.1:1935/live/capture1 live=1

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM - Custom schema payload
msg-conv-payload-type=0
msg-conv-msg2p-lib=…/nvmsgconv-hf/libnvds_msgconv_hf.so
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=10.0.0.31;9092;quickstart-events
#topic=
#Optional:
#msg-broker-config=…/…/deepstream-test4/cfg_kafka.txt

[osd]
enable=1
gpu-id=0
border-width=1
text-size=12
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000

## Set muxer output width and height

width=1280
height=720
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.

[primary-gie]
enable=1
gpu-id=0
model-engine-file=…/models/retinaface_mnet25_v2_dynamic.engine

labelfile-path=…/retinaface/labels.txt

batch-size=4
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=2
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_retinaface.txt

[tracker]
enable=1

## For the case of the NvDCF tracker, tracker-width and tracker-height must each be a multiple of 32

tracker-width=608
tracker-height=608
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
#ll-config-file required for IOU only
#ll-config-file=iou_config.txt
gpu-id=0

[tests]
file-loop=0

[nvds-analytics]
enable=1
config-file=config_nvdsanalytics.txt

This is my config; source1 is the invalid stream and source0 is the valid one.

The deepstream-app is designed to work this way.

Please check the bus_callback() function in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-app/deepstream_app.c: when the bus receives an error message, the pipeline quits.

If you need special processing for the error message, you can implement your own logic.

We hope that DeepStream can keep working when one RTSP source is invalid. When we change the RTMP sink to an RTSP sink, it works normally; we want the RTMP sink to work normally as well.

Thanks.

We don’t have such a sample now.

I want to ask: if I process 16 video streams, how can I push out the output video streams? Do you have any suggestions? Right now I use an RTMP sink to push out the stream.

It is possible to use sinks from the DeepStream sample app: Frequently Asked Questions — DeepStream 5.1 Release documentation

The detailed implementation can be found in the function create_sink_bin inside sources\apps\apps-common\src\deepstream_sink_bin.c

What do you mean by “push out the video stream”? You can save the output as video files, send it as network video streams, display it on screen, and so on. What you can do depends on your decision.
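As a sketch (not an official sample): for 16 sources, one option within deepstream-app's own config format is to enable the tiler so all streams are composited into one frame and pushed out as a single RTMP stream, reusing the custom type=7 RTMP sink from the config in this thread. The values below are illustrative.

```
# Hypothetical config fragment: tile 16 sources into one output frame,
# then push a single RTMP stream instead of one sink per source.
[tiled-display]
enable=1
rows=4
columns=4
width=1280
height=720

[streammux]
batch-size=16

[sink0]
enable=1
# type=7 is the custom RTMP sink type used in this thread
type=7
codec=1
sync=1
bitrate=4000000
rtmp-location=rtmp://127.0.0.1:1935/live/capture0 live=1
```

The alternative is one encode sink per source (as in the config above), at the cost of 16 encoder instances.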

For rtmpsink, please refer to the GStreamer documentation: rtmpsink (gstreamer.freedesktop.org)
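As an illustration (assuming a local RTMP server such as nginx-rtmp is already listening on the example address), a plain GStreamer pipeline can exercise rtmpsink outside DeepStream to rule out server-side problems:

```
# Standalone rtmpsink test, independent of DeepStream: encode a test
# pattern to H.264, mux to FLV, and push to a local RTMP server.
gst-launch-1.0 videotestsrc is-live=true ! \
  video/x-raw,width=640,height=480,framerate=30/1 ! \
  x264enc tune=zerolatency bitrate=2000 ! h264parse ! \
  flvmux streamable=true ! \
  rtmpsink location="rtmp://127.0.0.1:1935/live/test live=1"
```

If this pipeline streams correctly but deepstream-app does not, the problem is in the app's error handling rather than the RTMP server.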