DeepStream 7.1 – RTSP sources show intermittent "Could not read/write from resource" errors

• Hardware Platform: GPU (NVIDIA L40S)
• DeepStream Version: 7.1
• TensorRT Version: aligned with DeepStream 7.1 recommendations
• Issue Type: Questions
• Requirement Details:
Hello,

I am running a DeepStream 7.1 application on Kubernetes with a batch of 20 RTSP camera streams.
The pipeline runs correctly: inference is performed, output streaming continues without interruption, and the application itself seems stable.

However, the logs frequently show the following errors for some cameras (seemingly at random, and not always the same ones):

ERROR from src_elem10: Could not read from resource.
Could not receive message. (Parse error)
ERROR from src_elem10: Internal data stream error.
streaming stopped, reason error (-5)
ERROR from src_elem10: Could not write to resource.
Could not send message. (Parse error)

Despite these errors, the pipeline keeps running and no visible issues appear in the output.

My questions are:

  1. What is the root cause of these messages? Are they related to unstable RTSP sources, decoder warnings, or something else?

  2. Since processing continues normally, can I safely ignore them, or do they indicate a potential hidden issue?

  3. Are there recommended configuration parameters (e.g. latency, num-retries, rtsp-reconnect-interval-sec) to reduce or suppress these messages?

Below is the DeepStream source and pipeline configuration I am using.

[source0]
enable=1
type=4
uri=<rtsp>
num-sources=1
latency=200
drop-frame-interval=2
rtsp-reconnect-interval-sec=30
select-rtp-protocol=4
camera-fps-n=15
camera-fps-d=1

... <N sources>

[sourceN]
enable=1
type=4
uri=<rtsp>
num-sources=1
latency=200
drop-frame-interval=2
rtsp-reconnect-interval-sec=30
select-rtp-protocol=4
camera-fps-n=15
camera-fps-d=1


[primary-gie]
enable=1
batch-size=20
gie-unique-id=1
labelfile-path=labels.txt
nvbuf-memory-type=0
config-file=config_infer_primary.txt

[sink0]
enable=1
type=4
sync=0
codec=1
nvbuf-memory-type=0
bitrate=4000000
iframeinterval=10
rtsp-port=8554
udp-port=5555
profile=0
udp-buffer-size=1000000
qos=0

[sink1]
type=6
enable=1
...

[streammux]
live-source=1
batch-size=20
batched-push-timeout=4000
width=960
height=540
enable-padding=1
nvbuf-memory-type=0
attach-sys-ts-as-ntp=0

[osd]
enable=1
border-width=1
text-size=1
text-color=1;1;1;1;
text-bg-color=0.7;0.7;0.7;0.8
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[tiled-display]
enable=1
rows=6
columns=6
width=960
height=540

[tracker]
enable=1
tracker-width=320
tracker-height=320
ll-lib-file=../libs/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_accuracy.yml
display-tracking-id=1

[nvds-analytics]
enable=1
config-file=config_nvdsanalytics.txt

Thanks in advance for your support!

rtspsrc is open source; the “Could not receive message.” error is printed by rtspsrc itself (see subprojects/gst-plugins-good/gst/rtsp/gstrtspsrc.c on branch 1.20 in the GStreamer GitLab repository).

You can enable rtspsrc debug logging and compare the output against the rtspsrc source code to find the root cause.
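A minimal sketch of enabling that logging via the standard GStreamer environment variables (the deepstream-app invocation and config-file name are placeholders for your own launch command):

```shell
# Raise the rtspsrc category to level 6 (LOG); everything else stays at 2 (WARNING).
export GST_DEBUG="2,rtspsrc:6"
# Send the debug output to a file instead of stderr.
export GST_DEBUG_FILE=/tmp/rtspsrc_debug.log
# Then run your application as usual, e.g.:
# deepstream-app -c deepstream_app_config.txt
echo "$GST_DEBUG"   # sanity check of the category spec
```

The resulting log will show each RTSP request/response around the moment the "Could not receive message." error fires, which you can then match against the gstrtspsrc.c code paths.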

Since the pipeline keeps working, you can usually ignore these messages. Errors that are serious enough are reported through the GStreamer bus callback, where you can decide for yourself whether and how to handle them.
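As a sketch of that approach (pure Python; the function name and the classification policy are hypothetical — in a real application you would call something like this from the bus's "message::error" handler, passing the error message text):

```python
# Hypothetical policy: treat the RTSP read/write errors seen in the log above
# as transient, since the source's reconnect logic recovers from them, and
# flag anything else for explicit handling.

TRANSIENT_RTSP_ERRORS = (
    "Could not read from resource.",
    "Could not write to resource.",
    "Could not receive message.",
    "Could not send message.",
)

def classify_bus_error(message: str) -> str:
    """Return 'transient' for recoverable RTSP hiccups, 'actionable' otherwise."""
    if any(pattern in message for pattern in TRANSIENT_RTSP_ERRORS):
        return "transient"   # log and continue; the source will reconnect
    return "actionable"      # e.g. restart the source bin or alert an operator

print(classify_bus_error("Could not read from resource."))            # transient
print(classify_bus_error("No such element or plugin 'nvv4l2decoder'"))  # actionable
```

Which messages count as transient is entirely your decision; the tuple above just mirrors the errors quoted in the question.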

The right settings depend on your application and the actual network environment, and even on the RTSP server you are working with.
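For question 3, a hedged starting point rather than a recommendation — the values below are illustrative, and the right ones depend on your network. On lossy links the usual knobs are forcing RTP over TCP (select-rtp-protocol=4, which your config already sets), enlarging the jitterbuffer via latency, and tuning rtsp-reconnect-interval-sec:

```
[source0]
enable=1
type=4
uri=<rtsp>
# Larger jitterbuffer to absorb network jitter (illustrative value)
latency=500
# TCP-only RTP, avoids UDP packet loss (already set in your config)
select-rtp-protocol=4
# Reconnect sooner after a stalled stream (illustrative value)
rtsp-reconnect-interval-sec=10
```

None of these will suppress the rtspsrc log lines themselves; they only reduce how often the underlying network condition occurs.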