Dynamically adding RTSP sources to DeepStream

I am trying to achieve the task described below.
Camera sources will be added dynamically both in memory and in the configuration file. This eliminates the need to restart DeepStream when adding new cameras. If DeepStream restarts, all previously added cameras will be loaded from the configuration file automatically.
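
The persistence half of this can be as simple as appending a new [sourceN] group to the app's configuration file whenever a camera is added at runtime. A minimal sketch (a hypothetical helper, not part of the occupancy app; the group layout mirrors the [sourceN] groups shown further below):

def append_source_group(config_path, index, uri, camera_id):
    # Hypothetical helper: persist a newly added RTSP camera by appending a
    # [sourceN] group to the deepstream-app style config so it is loaded
    # again after a restart.
    group = (
        "\n[source%d]\n"
        "enable=1\n"
        "#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP\n"
        "type=4\n"
        "uri=%s\n"
        "camera-id=%d\n"
        "num-sources=1\n"
        "gpu-id=0\n"
        "nvbuf-memory-type=0\n"
    ) % (index, uri, camera_id)
    with open(config_path, "a") as f:
        f.write(group)
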
I am currently working with the occupancy analytics project by NVIDIA (GitHub - NVIDIA-AI-IOT/deepstream-occupancy-analytics), a sample application for counting people entering/leaving a building using the NVIDIA DeepStream SDK, Transfer Learning Toolkit (TLT), and pre-trained models. It can be used to build real-time occupancy analytics applications for smart buildings, hospitals, retail, etc., and is based on the deepstream-test5 sample application.

I have compiled the project and am successfully running it with multiple RTSP streams, but I want to add cameras dynamically to the running DeepStream instance so that the RTSP sources are updated in real time. I have referred to deepstream_python_apps/apps/runtime_source_add_delete/deepstream_rt_src_add_del.py at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

(deepstream_reference_apps/runtime_source_add_delete/deepstream_test_rt_src_add_del.c at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub)

but I am not able to get the desired output.
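
For reference, the core of both of those samples is to create a new uridecodebin for the URI at runtime, add it to the already-running pipeline, and link its decoded video pad to a request pad on the nvstreammux. A condensed Python sketch of that flow (it assumes an existing pipeline and streammux handle; element and pad names follow the referenced sample, everything else is illustrative):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def on_pad_added(decodebin, pad, user_data):
    # Link the decoder's video pad to a free sink_%u request pad on nvstreammux
    streammux, source_id = user_data
    caps = pad.get_current_caps() or pad.query_caps(None)
    if not caps.get_structure(0).get_name().startswith("video"):
        return
    sinkpad = streammux.get_request_pad("sink_%u" % source_id)
    if sinkpad is None or pad.link(sinkpad) != Gst.PadLinkReturn.OK:
        print("Failed to link new source", source_id)

def add_source(pipeline, streammux, uri, source_id):
    # Create a source bin for the new RTSP uri and attach it to the live pipeline
    source_bin = Gst.ElementFactory.make("uridecodebin", "source-bin-%02d" % source_id)
    source_bin.set_property("uri", uri)
    source_bin.connect("pad-added", on_pad_added, (streammux, source_id))
    pipeline.add(source_bin)
    # Bring only the new element up to PLAYING; the rest of the pipeline keeps running
    source_bin.set_state(Gst.State.PLAYING)

Since the occupancy app is based on deepstream-test5 / deepstream-app, the equivalent C changes have to be wired into its source-bin handling rather than dropped in as-is.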

System specification
Tue Feb 11 11:14:12 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.142 Driver Version: 550.142 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA RTX A4000 On | 00000000:01:00.0 Off | Off |
| 46% 65C P2 44W / 140W | 7484MiB / 16376MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 2950 G /usr/lib/xorg/Xorg 167MiB |
| 0 N/A N/A 3320 G …3/usr/bin/snapd-desktop-integration 31MiB |
| 0 N/A N/A 71207 G /usr/lib/xorg/Xorg 25MiB |
| 0 N/A N/A 565327 C python3 662MiB |
| 0 N/A N/A 565452 C python3 662MiB |
| 0 N/A N/A 565574 C python3 662MiB |
| 0 N/A N/A 565981 C python3 662MiB |
| 0 N/A N/A 687280 C python3 158MiB |
| 0 N/A N/A 695436 C python3 158MiB |
| 0 N/A N/A 696568 C python3 158MiB |
| 0 N/A N/A 713628 C python3 158MiB |
| 0 N/A N/A 759034 G /usr/bin/gnome-shell 112MiB |
| 0 N/A N/A 858458 C /usr/local/bin/a2f_pipeline.run 3282MiB |
| 0 N/A N/A 937132 G …irefox/5701/usr/lib/firefox/firefox 337MiB |
+-----------------------------------------------------------------------------------------+

Config file for the DeepStream occupancy app:

################################################################################
# Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0

#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
#uri=file:///opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/deepstream-occupancy-analytics/videos/video2.mp4
uri=rtsp://username:password@ipaddress:8888

camera-id=1
num-sources=1
gpu-id=0
nvbuf-memory-type=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3

uri=rtsp://username:password@ipaddress:8888


num-sources=1
camera-id=3
gpu-id=0
nvbuf-memory-type=0
# smart record specific fields, valid only for source type=4
# 0 = disable, 1 = through cloud events, 2 = through cloud + local events
smart-record=2
# 0 = mp4, 1 = mkv
#smart-rec-container=0
smart-rec-start-time=1
smart-rec-file-prefix=smart_record
#smart-rec-dir-path=/home/monika/record
# cache size in seconds
smart-rec-cache=10

[source2]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://username:password@ipaddress:8888

camera-id=1
num-sources=1
gpu-id=0
nvbuf-memory-type=0

[source3]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://username:password@ipaddress:8888

camera-id=1
num-sources=1
gpu-id=0
nvbuf-memory-type=0

[sink0]
enable=1
type=2
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
#iframeinterval=10
bitrate=100000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
output-file=resnet.mp4
source-id=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=msgconv_sample_config.txt
# Name of library having custom implementation.
# msg-conv-msg2p-lib=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-occupancy-analytics/bin/jetson/libnvds_msgconv.so
msg-conv-msg2p-lib=/opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/occupancy_analytics/bin/x86/libnvds_msgconv.so
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=kafka_container;9092;quickstart-events
#topic=<topic>
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt

[sink2]
enable=1
type=1
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=2

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1


[sink3]
enable=1
type=1
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=3

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>


# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
conn-str=localhost;9092
#config-file=<broker config file e.g. cfg_kafka.txt>
subscribe-topic-list=quickstart-events
# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
sensor-list-file=msgconv_sample_config.txt

[osd]
enable=1
gpu-id=0
border-width=1
text-size=10
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=0
batch-size=2
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
#bbox-border-color1=0;1;1;1
#bbox-border-color2=0;1;1;1
#bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
config-file=pgie_peoplenet_tao_config.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/

[tracker]
enable=1
tracker-width=640
tracker-height=384
gpu-id=0
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../../../../samples/configs/deepstream-app/config_tracker_IOU.yml
# ll-config-file=../../../../samples/configs/deepstream-app/config_tracker_NvSORT.yml
ll-config-file=../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../../../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../../../../samples/configs/deepstream-app/config_tracker_NvDeepSORT.yml
# enable-batch-process=0

[nvds-analytics]
enable=1
config-file=nvdsanalytics_config.txt

[tests]
file-loop=0

Which version of DeepStream are you using? You need to integrate the runtime_source_add_delete code into your project yourself.
There is also a simpler way to fulfill your needs: you can try our nvmultiurisrcbin and use a CLI command to add or remove the sources.
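
With nvmultiurisrcbin, adding or removing sources at runtime is done through the REST server that the bin starts on the configured http-ip/http-port. A rough sketch of an "add stream" request in Python (the endpoint path and JSON fields are assumptions based on the Gst-nvmultiurisrcbin / deepstream-server documentation; verify the exact schema for your DeepStream version):

import json
import urllib.request

# Assumes the app was started with use-nvmultiurisrcbin=1 and http-port=9000
payload = {
    "key": "sensor",
    "value": {
        "camera_id": "UniqueSensorId4",          # new, unique sensor id
        "camera_name": "UniqueSensorName4",
        "camera_url": "rtsp://username:password@ipaddress:8888",
        "change": "camera_add",
        "metadata": {"resolution": "1920x1080", "codec": "h264", "framerate": 30},
    },
    "headers": {"source": "vst", "created_at": "2025-03-11T00:00:00.000Z"},
}

req = urllib.request.Request(
    "http://localhost:9000/api/v1/stream/add",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())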

Using deepstream:7.0-triton-multiarch.

The issue I was facing was connecting to the running DeepStream pipeline.
I'll refer to the link shared.

How does Gst-nvmultiurisrcbin (DeepStream documentation) connect to the currently running pipeline? In my case, the occupancy DeepStream instance.

You can try to learn how to use that by referring to our deepstream-server sample. Then replace the relevant part of your code with nvmultiurisrcbin.
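
Removing a stream at runtime goes through a similar request against the same REST server (again a sketch; the remove endpoint and payload fields are assumptions to be checked against the deepstream-server sample):

import json
import urllib.request

# camera_id must match the sensor id the stream was added with
payload = {
    "key": "sensor",
    "value": {
        "camera_id": "UniqueSensorId4",
        "camera_url": "rtsp://username:password@ipaddress:8888",
        "change": "camera_remove",
    },
    "headers": {"source": "vst", "created_at": "2025-03-11T00:00:00.000Z"},
}

req = urllib.request.Request(
    "http://localhost:9000/api/v1/stream/remove",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
urllib.request.urlopen(req)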

Sure, thanks.

Logs

0:00:39.970381500 10455 0x558cb8df2870 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2141> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/occupancy_analytics/config/peoplenet/resnet34_peoplenet_pruned_int8.etlt_b10_gpu0_int8.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 12x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 3x34x60         

0:00:40.067740155 10455 0x558cb8df2870 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/occupancy_analytics/config/pgie_peoplenet_tao_config.txt sucessfully

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF:  FPS 0 (Avg)	FPS 1 (Avg)	FPS 2 (Avg)	FPS 3 (Avg)	FPS 4 (Avg)	FPS 5 (Avg)	FPS 6 (Avg)	FPS 7 (Avg)	FPS 8 (Avg)	FPS 9 (Avg)	
Tue Mar 11 13:09:13 2025
**PERF:  0.00 (0.00)	0.00 (0.00)	0.00 (0.00)	0.00 (0.00)	0.00 (0.00)	2116236018261888334182616439826041235447619007598651022277241010286063634750327980726130878623790430377993936176051912704.00 (0.00)	6734288548868692098131428585345149376123353053553449737476383575375292044826038613355889383753004482624176295478214824791576174756987050854225722344755645551357207084498156893195682112275002918928746712181735045004437289118088518878944891192704439073166844296719313605742139055640416354304.00 (0.00)	137661949073550437829176746277102823019286169367007651194840318100979551882870900589397575099341120626445549582472073159397225629403146712713747885907355933622861461768284378497024.00 (0.00)	-2841725111111832865125634670592.00 (0.00)	0.00 (0.00)	
new stream added [0:UniqueSensorId1:UniqueSensorName1]



new stream added [1:UniqueSensorId2:UniqueSensorName2]



new stream added [2:UniqueSensorId3:UniqueSensorName3]



** INFO: <bus_callback:291>: Pipeline ready

** INFO: <bus_callback:277>: Pipeline running

0:00:40.110819664 10455 0x7849ec0035c0 WARN                  udpsrc gstudpsrc.c:1637:gst_udpsrc_open:<udpsrc1> warning: Could not create a buffer of requested 524288 bytes (Operation not permitted). Need net.admin privilege?
0:00:40.110822871 10455 0x7849ec0010a0 WARN                  udpsrc gstudpsrc.c:1637:gst_udpsrc_open:<udpsrc0> warning: Could not create a buffer of requested 524288 bytes (Operation not permitted). Need net.admin privilege?
0:00:40.110839562 10455 0x7849ec0035c0 WARN                  udpsrc gstudpsrc.c:1647:gst_udpsrc_open:<udpsrc1> have udp buffer of 212992 bytes while 524288 were requested
0:00:40.110843924 10455 0x7849ec0010a0 WARN                  udpsrc gstudpsrc.c:1647:gst_udpsrc_open:<udpsrc0> have udp buffer of 212992 bytes while 524288 were requested
0:00:40.110859162 10455 0x7849ec003280 WARN                  udpsrc gstudpsrc.c:1637:gst_udpsrc_open:<udpsrc2> warning: Could not create a buffer of requested 524288 bytes (Operation not permitted). Need net.admin privilege?
0:00:40.110875878 10455 0x7849ec003280 WARN                  udpsrc gstudpsrc.c:1647:gst_udpsrc_open:<udpsrc2> have udp buffer of 212992 bytes while 524288 were requested
0:00:40.110952569 10455 0x7849ec0010a0 WARN                  udpsrc gstudpsrc.c:1637:gst_udpsrc_open:<udpsrc4> warning: Could not create a buffer of requested 524288 bytes (Operation not permitted). Need net.admin privilege?
0:00:40.110963584 10455 0x7849ec0010a0 WARN                  udpsrc gstudpsrc.c:1647:gst_udpsrc_open:<udpsrc4> have udp buffer of 212992 bytes while 524288 were requested
0:00:40.110977422 10455 0x7849ec003280 WARN                  udpsrc gstudpsrc.c:1637:gst_udpsrc_open:<udpsrc5> warning: Could not create a buffer of requested 524288 bytes (Operation not permitted). Need net.admin privilege?
0:00:40.110988002 10455 0x7849ec003280 WARN                  udpsrc gstudpsrc.c:1647:gst_udpsrc_open:<udpsrc5> have udp buffer of 212992 bytes while 524288 were requested
0:00:41.114462796 10455 0x7849ec003900 FIXME                default gstutils.c:4025:gst_pad_create_stream_id_internal:<fakesrc0:src> Creating random stream-id, consider implementing a deterministic way of creating a stream-id
0:00:41.114672567 10455 0x7849ec003c40 FIXME                default gstutils.c:4025:gst_pad_create_stream_id_internal:<fakesrc1:src> Creating random stream-id, consider implementing a deterministic way of creating a stream-id
0:00:41.114798845 10455 0x7849ec003f80 FIXME                default gstutils.c:4025:gst_pad_create_stream_id_internal:<fakesrc2:src> Creating random stream-id, consider implementing a deterministic way of creating a stream-id
0:00:41.127815994 10455 0x7849ec005640 FIXME           rtph265depay gstrtph265depay.c:1287:gst_rtp_h265_depay_process:<depay> Assuming DONL field is not present

Unable to get the output window.

Config

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0

#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
[source-list]
num-source-bins=3
list=rtsp://example-rtsp-1;rtsp://example-rtsp-2;rtsp://example-rtsp-3
sensor-id-list=UniqueSensorId1;UniqueSensorId2;UniqueSensorId3
# Optional sensor-name-list if needed
sensor-name-list=UniqueSensorName1;UniqueSensorName2;UniqueSensorName3
use-nvmultiurisrcbin=1
max-batch-size=10
http-ip=localhost
http-port=9000
sgie-batch-size=40

[source-attr-all]
enable=0
type=4
num-sources=3
gpu-id=0
cudadec-memtype=0
latency=100
rtsp-reconnect-interval-sec=10


[primary-gie]
enable=1
gpu-id=0
batch-size=2
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
#bbox-border-color1=0;1;1;1
#bbox-border-color2=0;1;1;1
#bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
config-file=pgie_peoplenet_tao_config.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/

[sink0]
enable=1
type=1  # Use type=1 for EGL sink
sync=0  # Set to 0 for smoother display
window-width=1280  # Window width
window-height=720  # Window height


[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=msgconv_sample_config.txt
# Name of library having custom implementation.
# msg-conv-msg2p-lib=/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-occupancy-analytics/bin/jetson/libnvds_msgconv.so
msg-conv-msg2p-lib=/opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/occupancy_analytics/bin/x86/libnvds_msgconv.so
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=kafka_container;9092;raw-events
#topic=<topic>
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt

[sink2]
enable=1
type=1  # Use type=1 for EGL sink
sync=0  # Set to 0 for smoother display
window-width=1280  # Window width
window-height=720  # Window height


# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
conn-str=localhost;9092
#config-file=<broker config file e.g. cfg_kafka.txt>
subscribe-topic-list=raw-events
# Use this option if message has sensor name as id instead of index (0,1,2 etc.).
sensor-list-file=msgconv_sample_config.txt

[osd]
enable=1
gpu-id=0
border-width=1
text-size=10
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1


[tracker]
enable=1
tracker-width=640
tracker-height=384
gpu-id=0
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../../../../samples/configs/deepstream-app/config_tracker_IOU.yml
# ll-config-file=../../../../samples/configs/deepstream-app/config_tracker_NvSORT.yml
ll-config-file=../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../../../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../../../../samples/configs/deepstream-app/config_tracker_NvDeepSORT.yml
# enable-batch-process=0

[nvds-analytics]
enable=1
config-file=nvdsanalytics_config.txt

[tests]
file-loop=0

New errors in the log:

0:14:48.922569107 10629 0x63184eb2e8b0 ERROR            nvmsgbroker gstnvmsgbroker.cpp:102:nvds_msgapi_send_callback:<sink_sub_bin_sink2> error(1) in sending data
0:14:48.922583655 10629 0x63184eb2e8b0 ERROR            nvmsgbroker gstnvmsgbroker.cpp:102:nvds_msgapi_send_callback:<sink_sub_bin_sink2> error(1) in sending data
0:14:48.922589314 10629 0x63184eb2e8b0 ERROR            nvmsgbroker gstnvmsgbroker.cpp:102:nvds_msgapi_send_callback:<sink_sub_bin_sink2> error(1) in sending data
0:14:48.922594319 10629 0x63184eb2e8b0 ERROR            nvmsgbroker gstnvmsgbroker.cpp:102:nvds_msgapi_send_callback:<sink_sub_bin_sink2> error(1) in sending data
0:14:48.922598127 10629 0x63184eb2e8b0 ERROR            nvmsgbroker gstnvmsgbroker.cpp:102:nvds_msgapi_send_callback:<sink_sub_bin_sink2> error(1) in sending data
0:14:48.922602042 10629 0x63184eb2e8b0 ERROR            nvmsgbroker gstnvmsgbroker.cpp:102:nvds_msgapi_send_callback:<sink_sub_bin_sink2> error(1) in sending data
0:14:48.922605780 10629 0x63184eb2e8b0 ERROR            nvmsgbroker gstnvmsgbroker.cpp:102:nvds_msgapi_send_callback:<sink_sub_bin_sink2> error(1) in sending data
0:14:49.923036448 10629 0x63184eb2e8b0 ERROR            nvmsgbroker gstnvmsgbroker.cpp:102:nvds_msgapi_send_callback:<sink_sub_bin_sink2> error(1) in sending data
0:14:49.923052149 10629 0x63184eb2e8b0 ERROR            nvmsgbroker gstnvmsgbroker.cpp:102:nvds_msgapi_send_callback:<sink_sub_bin_sink2> error(1) in sending data
0:14:49.923056835 10629 0x63184eb2e8b0 ERROR            nvmsgbroker gstnvmsgbroker.cpp:102:nvds_msgapi_send_callback:<sink_sub_bin_sink2> error(1) in sending data
0:14:49.923060875 10629 0x63184eb2e8b0 ERROR            nvmsgbroker gstnvmsgbroker.cpp:102:nvds_msgapi_send_callback:<sink_sub_bin_sink2> error(1) in sending data

It is possible that your nvmsgconv and nvmsgbroker environments are not configured properly. Could you try disabling sink1 first?

The error was solved by compiling some plugins. The app is running, but I am still not getting an output window with the streams.

INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 12x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 3x34x60         

0:00:39.277639571 12255 0x58ac6cc51730 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/occupancy_analytics/config/pgie_peoplenet_tao_config.txt sucessfully

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF:  FPS 0 (Avg)	FPS 1 (Avg)	FPS 2 (Avg)	FPS 3 (Avg)	FPS 4 (Avg)	FPS 5 (Avg)	FPS 6 (Avg)	FPS 7 (Avg)	FPS 8 (Avg)	FPS 9 (Avg)	
Wed Mar 12 09:25:29 2025
**PERF:  0.00 (0.00)	0.00 (0.00)	0.00 (0.00)	0.00 (0.00)	0.00 (0.00)	2116236018261888334182616439826041235447619007598651022277241010286063634750327980726130878623790430377993936176051912704.00 (0.00)	6734288548868692098131428585345149376123353053553449737476383575375292044826038613355889383753004482624176295478214824791576174756987050854225722344755645551357207084498156893195682112275002918928746712181735045004437289118088518878944891192704439073166844296719313605742139055640416354304.00 (0.00)	137661949073550437829176746277102823019286169367007651194840318100979551882870900589397575099341120626445549582472073159397225629403146712713747885907355933622861461768284378497024.00 (0.00)	-2841725111111832865125634670592.00 (0.00)	0.00 (0.00)	
new stream added [0:UniqueSensorId1:UniqueSensorName1]



new stream added [1:UniqueSensorId2:UniqueSensorName2]



new stream added [2:UniqueSensorId3:UniqueSensorName3]



** INFO: <bus_callback:291>: Pipeline ready

** INFO: <bus_callback:277>: Pipeline running

mimetype is video/x-raw

I was able to get the output window by setting the sink type to 2 (EglSink); with type=1 (FakeSink), nothing is rendered.

[sink0]
type=2