VLC cannot display more than 3 RTSP streams output by different apps

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) RTX 2070S
• DeepStream Version 5.0 DP
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.0.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 440.1
• Issue Type( questions, new requirements, bugs) bugs?
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I have 4 DeepStream apps based on the deepstream-test5 app running on one server (a single RTX 2070S 8 GB GPU). They are deployed as DeepStream 5.0 DP Docker containers; each app takes one RTSP stream as input and outputs one RTSP stream (RTSP sink).
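
For reference, each container is launched with something like the command below; the image tag, container name, port mapping, binary name, and config path are illustrative placeholders rather than the exact values from my setup:

docker run --gpus all -d --name ds-app-1 \
    -p 8554:8554 \
    -v $(pwd)/configs:/configs \
    nvcr.io/nvidia/deepstream:5.0-dp-20.04-devel \
    deepstream-test5-app -c /configs/app1_config.txt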

When I view the output RTSP streams in VLC one by one, three of the four show normal video, but the last one is always a black screen. No matter in which order I start the apps, the last one always fails to stream.
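
Each output stream can also be probed from the command line to rule out a VLC-side issue; /ds-test is the default mount point created by the test5-based app's RTSP sink, and the host and port below are placeholders for whatever each container exposes:

ffprobe -rtsp_transport tcp rtsp://<server-ip>:8554/ds-test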

I checked the status of the last app's container: it stays up for several seconds and then restarts automatically, so I suspect the app does not start successfully. The app's log shows the error below.
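
I pulled the container status and the log roughly like this (the container name is a placeholder, not the real one):

docker ps -a --filter name=ds-app-4   # status cycles between "Up" and "Restarting"
docker logs --tail 50 ds-app-4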

ERROR from sink_sub_bin_encoder1: Device '/dev/nvhost-msenc' failed during initialization.
Call to S_FMT failed for YM12 @ 1280x720: Unknown error -1

What does this message mean? How can I fix that?
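
As a side note, one way to isolate the encoder initialization from the full app is a minimal GStreamer pipeline that only opens a hardware H.264 encoder session. This is just a sketch assuming the dGPU DeepStream plugins (nvvideoconvert, nvv4l2h264enc) are available inside the container; running it in the fourth container while the other three apps are streaming should show whether creating one more encoder session is what fails:

gst-launch-1.0 videotestsrc num-buffers=100 ! \
    'video/x-raw,width=1280,height=720' ! nvvideoconvert ! \
    'video/x-raw(memory:NVMM),format=NV12' ! nvv4l2h264enc bitrate=1500000 ! \
    h264parse ! fakesink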

Here is some information that might be useful:

  • nvidia-smi (an encoder-session query sketch follows after this list)

  • app configuration
[application]
enable-perf-measurement = 1
perf-measurement-interval-sec = 5

[tiled-display]
enable = 0
rows = 2
columns = 2
width = 1280
height = 720
gpu-id = 0
nvbuf-memory-type = 0

[sink0]
enable = 0
type = 1
sync = 0
source-id = 0
gpu-id = 0
nvbuf-memory-type = 0

[sink1]
enable = 1
type = 4
codec = 1
enc-type = 0
sync = 0
qos = 0
bitrate = 1500000
profile = 0
rtsp-port = 8554
udp-port = 5400
width = 1280
height = 720

[sink2]
enable = 1
type = 6
msg-conv-config = dstest5_msgconv_sample_config.txt
msg-conv-payload-type = 257
msg-broker-proto-lib = /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so
msg-conv-msg2p-lib = /opt/nvidia/deepstream/deepstream-5.0/sources/libs/nvmsgconv/libnvds_msgconv.so
msg-broker-conn-str = 10.168.1.172;9092;test
topic = test

[message-converter]
enable = 0
msg-conv-config = dstest5_msgconv_sample_config.txt
msg-conv-payload-type = 0

[message-consumer0]
enable = 0
proto-lib = /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so
conn-str = <host>;<port>
config-file = <broker config file e.g. cfg_kafka.txt>
subscribe-topic-list = <topic1>;<topic2>;<topicN>

[osd]
enable = 1
gpu-id = 0
border-width = 1
text-size = 15
text-color = 1;1;1;1;
text-bg-color = 0.3;0.3;0.3;1
font = Arial
show-clock = 0
clock-x-offset = 800
clock-y-offset = 820
clock-text-size = 12
clock-color = 1;0;0;0
nvbuf-memory-type = 0

[streammux]
gpu-id = 0
live-source = 1
batch-size = 1
batched-push-timeout = 40000
width = 1280
height = 720
enable-padding = 0
nvbuf-memory-type = 0

[primary-gie]
enable = 1
gpu-id = 0
batch-size = 1
network-mode = 1
bbox-border-color0 = 1;0;0;1
bbox-border-color1 = 0;1;1;1
bbox-border-color2 = 0;1;1;1
bbox-border-color3 = 0;1;0;1
nvbuf-memory-type = 0
interval = 0
gie-unique-id = 1
model-engine-file = yolov4-hat.engine
labelfile-path = hat_labels.txt
config-file = config_infer_primary_yoloV4.txt

[tracker]
enable = 1
tracker-width = 600
tracker-height = 288
ll-lib-file = /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gpu-id = 0
enable-batch-process = 0

[tests]
file-loop = 0

[source0]
enable = 1
type = 4
uri = rtsp://admin:ldsw1234@10.168.1.248:554/h264/ch1/main/av_stream
num-sources = 1
gpu-id = 0
cudadec-memtype = 0
smart-record = 1
smart-rec-video-cache = 30
smart-rec-duration = 5
smart-rec-start-time = 3
smart-rec-interval = 5
smart-rec-dir-path = ./capture

  • htop
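
Beyond the plain nvidia-smi output, the number of active NVENC sessions on the GPU can also be queried directly; this is a generic sketch, not output captured from my machine:

nvidia-smi --query-gpu=encoder.stats.sessionCount,encoder.stats.averageFps --format=csv
nvidia-smi dmon -s u   # per-second encoder/decoder utilization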

Problem solved, see Fiona's reply: