DeepStream pipeline stops when an RTSP camera gets disconnected from the network and an error message is received

The above link is a reply given to an RTSP connection issue.
It is mentioned there that deepstream-app continues to run even if one or more streams are broken.

When a camera is disconnected from the network, the app continues to run for a while and then prints this:
Warning: gst-resource-error-quark: Could not read from resource. (9): gstrtspsrc.c(5637): gst_rtspsrc_loop_udp (): /GstPipeline:pipeline0/GstBin:source-bin-02/GstURIDecodeBin:uri-decode-bin/GstRTSPSrc:source:
Unhandled return value -7.
Error: gst-resource-error-quark: Could not read from resource. (9): gstrtspsrc.c(5705): gst_rtspsrc_loop_udp (): /GstPipeline:pipeline0/GstBin:source-bin-02/GstURIDecodeBin:uri-decode-bin/GstRTSPSrc:source:
Could not receive message. (System error)
[NvMultiObjectTracker] De-initialized
[NvMultiObjectTracker] De-initialized

It is also mentioned in the link that the app knows what to do if it receives an error message. What does that mean?
Also, the pipeline stops a few minutes after one or more cameras are disconnected from the network.

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or which sample application, and the function description.)

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.1.1
• TensorRT Version: TensorRT 8.4.1.5
• NVIDIA GPU Driver Version (valid for GPU only): 525.147.05
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? Add multiple camera sources and disconnect them while the pipeline is running.
• Requirement details

  1. Could you share the whole log and configuration file? Thanks!
  2. Since DS 6.1.1 is an old version, could you try DS 6.3?

I have created a topic for a related issue, reporting that parallel streammux doesn't work on DeepStream 6.3, and I have received no proper support for it.
That is the reason I am using 6.1.1.

PGIE CONFIG:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
custom-network-config=/opt/nvidia/deepstream/deepstream-6.1/nvodin23/models/assets/face/yolov4-tiny.cfg
model-file=/opt/nvidia/deepstream/deepstream-6.1/nvodin23/models/assets/face/yolov4-tiny_best.weights
model-engine-file=/opt/nvidia/deepstream/deepstream-6.1/nvodin23/face_gf_rtx.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.1/nvodin23/models/assets/face/class.txt
force-implicit-batch-dim=1
batch-size=1
network-mode=2
process-mode=1
model-color-format=0
num-detected-classes=1
interval=5
secondary-reinfer-interval=15
gie-unique-id=2
output-blob-names=num_detections;detection_boxes;detection_scores;detection_classes
#scaling-filter=0
#scaling-compute-hw=0
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.1/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
output-tensor-meta=0
network-type=0

[class-attrs-all]
pre-cluster-threshold=0.1
eps=0.2
group-threshold=1

############ SGIE CONFIG
[property]
gpu-id=0
net-scale-factor=1
model-engine-file=/opt/nvidia/deepstream/deepstream-6.1/nvodin23/models/assets/age/normal_age_gf_rtx.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.1/nvodin23/models/assets/age/normal_age_label.txt
force-implicit-batch-dim=1
batch-size=1

# 0=FP32 and 1=INT8 mode

network-mode=2
input-object-min-width=5
input-object-min-height=5
process-mode=2
model-color-format=1
gpu-id=0
gie-unique-id=4
operate-on-gie-id=2
#operate-on-class-ids=0
#is-classifier=1
interval=0

output-blob-names=dense_2
classifier-async-mode=0
classifier-threshold=0.00001
process-mode=2
#scaling-filter=0
#scaling-compute-hw=0

output-tensor-meta=1
network-type=1
######################## SGIE 2 CONFIG
[property]
gpu-id=0
net-scale-factor=1

model-engine-file=/opt/nvidia/deepstream/deepstream-6.1/nvodin23/models/assets/gender/gender_gf_rtx.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.1/nvodin23/models/assets/gender/gender_label.txt
force-implicit-batch-dim=1
batch-size=1

# 0=FP32 and 1=INT8 mode

network-mode=2
input-object-min-width=5
input-object-min-height=5
model-color-format=1
gpu-id=0
gie-unique-id=5
operate-on-gie-id=2
operate-on-class-ids=0
output-blob-names=fc8

classifier-async-mode=0
classifier-threshold=0.0001
process-mode=2
#scaling-filter=0
#scaling-compute-hw=0
interval=0

output-tensor-meta=1
network-type=1

Are you testing deepstream-app? Could you share the cfg used to start deepstream-app, and a whole log? If the number of “nvstreammux: Successfully handled EOS…” prints equals the number of sources, the app will exit.

This is when one camera is not connected to the network while starting the pipeline; it pushes a few frames and then the pipeline stops:

ERROR: source : Could not open resource for reading and writing.
debugging info: gstrtspsrc.c(7893): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstRTSPSrc:source:
Failed to connect. (Generic error)
[NvMultiObjectTracker] De-initialized
[NvMultiObjectTracker] De-initialized
[ERROR push 334] push failed [-2]
[ERROR push 334] push failed [-2]

This is when one camera gets disconnected while the pipeline is running; after about 10-15 minutes, I get these logs:

ERROR: source : Could not read from resource.
debugging info: gstrtspsrc.c(5705): gst_rtspsrc_loop_udp (): /GstPipeline:pipeline0/GstBin:source-bin-01/GstURIDecodeBin:uri-decode-bin/GstRTSPSrc:source:
Could not receive message. (System error)
[NvMultiObjectTracker] De-initialized
[NvMultiObjectTracker] De-initialized

I don't want the app to exit, as there are other streams still running.

I am not using deepstream-app; I am using a deepstream-parallel-inference implementation in Python (I have written the custom pipeline following the C++ reference).

Please refer to bus_callback in /opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-app/deepstream_app.c; you can make the app not exit when it receives an error message.
For example, deepstream-app will not exit when printing “Failed to connect. (Generic error)”. Here is my test.
12-13.txt (3.1 KB)
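For a Python pipeline, the equivalent of that bus_callback is roughly the sketch below, assuming a GLib.MainLoop drives the pipeline; the names bus_call and loop are illustrative and not taken from the attached test. The watch only logs Gst.MessageType.ERROR and keeps running, and quits the loop only on EOS.

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    def bus_call(bus, message, loop):
        # Log errors from a broken RTSP source but keep the main loop (and
        # the remaining streams) running; only EOS stops the app.
        t = message.type
        if t == Gst.MessageType.EOS:
            print("End-Of-Stream reached, quitting")
            loop.quit()
        elif t == Gst.MessageType.WARNING:
            warn, dbg = message.parse_warning()
            print("Warning:", message.src.get_name(), ":", warn.message, dbg)
        elif t == Gst.MessageType.ERROR:
            err, dbg = message.parse_error()
            print("Error:", message.src.get_name(), ":", err.message, dbg)
            # deliberately NOT calling loop.quit() here, so one failed
            # camera does not bring down the whole pipeline
        return True

    # Illustrative usage: attach the watch before setting the pipeline to PLAYING.
    # loop = GLib.MainLoop()
    # bus = pipeline.get_bus()
    # bus.add_signal_watch()
    # bus.connect("message", bus_call, loop)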

Let me try!

Sorry for the late reply. Is this still a DeepStream issue to support?

How do I solve this in Python?

Please refer to this code; you can make the app not exit when it receives an error message.

The code says to quit when an EOS is received?

Yes, exiting on EOS is the normal case. You can make the app not quit after receiving Gst.MessageType.ERROR.

This is my code… Even after using this, my code still stops:

    bus = self.pipeline.get_bus()
    dela = self.pipeline.get_delay()
    lat = self.pipeline.get_latency()
    msg = bus.timed_pop_filtered(
        Gst.CLOCK_TIME_NONE,
        Gst.MessageType.ERROR | Gst.MessageType.EOS
    )

    if msg:
        t = msg.type
        if t == Gst.MessageType.ERROR:
            err, dbg = msg.parse_error()
            print("ERROR:", msg.src.get_name(), ":", err.message)
            if dbg:
                print("debugging info:", dbg)
        elif t == Gst.MessageType.EOS:
            print("End-Of-Stream reached")
        else:
            print("ERROR: Unexpected message received.")

Did the app print the “ERROR:” log? Could you share the whole log?

When a camera is disconnected from the network, the app continues to run for a while and then prints this:
Warning: gst-resource-error-quark: Could not read from resource. (9): gstrtspsrc.c(5637): gst_rtspsrc_loop_udp (): /GstPipeline:pipeline0/GstBin:source-bin-02/GstURIDecodeBin:uri-decode-bin/GstRTSPSrc:source:
Unhandled return value -7.
Error: gst-resource-error-quark: Could not read from resource. (9): gstrtspsrc.c(5705): gst_rtspsrc_loop_udp (): /GstPipeline:pipeline0/GstBin:source-bin-02/GstURIDecodeBin:uri-decode-bin/GstRTSPSrc:source:
Could not receive message. (System error)
[NvMultiObjectTracker] De-initialized
[NvMultiObjectTracker] De-initialized

I have shared the log above.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
“Could not receive message. (System error)” comes from here; the low-level element sends this error up to the application. Did your code receive the error after adding printing in bus_callback? If you use a local file instead, can bus_callback print the normal log?
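As a quick illustration of that last question, the same source bin can be pointed at a local file instead of the RTSP camera to check whether the bus callback prints the normal log without the network in the picture; the sample clip path below is just the usual DeepStream samples location and is an assumption about your install.

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    # Swap the RTSP uri for a local file; if the bus callback then prints the
    # normal EOS/ERROR logs, the callback itself is fine and the problem is on
    # the RTSP/network side.
    uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    uri_decode_bin.set_property(
        "uri",
        "file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4",
    )
    # instead of:
    # uri_decode_bin.set_property("uri", "rtsp://<camera-ip>:554/<stream>")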