Multiple RTSP outputs in DeepStream parallel inference app

I’m using the DeepStream parallel inference C++ application to run multiple models on multiple RTSP streams. Now I’d also like to output multiple RTSP streams, which will be restreamed to WebRTC using go2rtc and displayed in a web application. I’m a beginner with DeepStream though, so this approach might not be optimal, and I’m open to other ways of displaying live streams with detections/boxes/etc. in a web app.

I’ve tried the following things, which didn’t work:

  • Removing the tiled display (this was suggested in another thread and worked with the deepstream reference app) - the program ends almost instantly because the whole linking logic depends on it (if (config->tiled_display_config.enable))
  • Manually assigning source IDs to the sources when parsing the configuration YAML file (config->multi_source_config[source_id].source_id = source_id; in the parse_config_file_yaml function) and adding a source-id property to a sink in the YAML configuration, roughly as sketched below. The sink with source ID 0 worked, but it output the full tiled display. Anything other than 0 doesn’t work.
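
For reference, the extra sink I added to the YAML looked roughly like this (values are illustrative; the key names follow the usual deepstream-app sink group and may differ slightly from my actual file):

sink2:
  enable: 1
  # 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
  type: 4
  codec: 1
  sync: 0
  bitrate: 4000000
  rtsp-port: 8555
  udp-port: 5401
  source-id: 1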

Thanks for any advice.

• Hardware Platform (Jetson / GPU) - GPU
• DeepStream Version - 7.1
• TensorRT Version - 10.3.0.26-1+cuda12.5
• NVIDIA GPU Driver Version (valid for GPU only) - 550.127.08
• Issue Type (questions, new requirements, bugs) - questions

The tiler plugin is used to composite a batch of frames into one frame. As the code shows, the app only covers the case of tiled_display_config.enable=true.
If you want multiple RTSP outputs, you only need to add multiple sinks with type=4. Please refer to /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt for how to set up RTSP output.
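
The RTSP sink group in that sample looks roughly like this (port numbers and bitrate are only examples; see the sample file for the full list of keys):

[sink2]
enable=1
# Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
# 1=h264 2=h265
codec=1
sync=0
bitrate=4000000
rtsp-port=8554
udp-port=5400

Each additional type=4 sink needs its own rtsp-port and udp-port.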

I’ve tried adding multiple RTSP (type 4) sinks in my config (I’m currently using this config with the RTSP sink enabled: https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/blob/master/deepstream_parallel_inference_app/tritonclient/sample/configs/apps/bodypose_yolo_lpr/source4_1080p_dec_parallel_infer.yml), but that only works for the tiled display stream. I’d like to have multiple RTSP streams:

  • one for the tiled display, which will be used as a preview
  • n streams with detections visualized, one per input, which will be used for a detailed view

deepstream-parallel-inference is open source. You can modify the code to try the following pipeline:

  … tee name=t -> tiler -> nvdsosd -> nveglglessink
  t. -> nvstreamdemux -> nveglglessink (show source0)
  t. -> nvstreamdemux -> nveglglessink (show source1)
  t. -> …
Please refer to the following pipeline:

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 \
  filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! mux.sink_1 \
  nvstreammux name=mux batch-size=2 width=1920 height=1080 nvbuf-memory-type=3 ! tee name=t ! queue ! nvmultistreamtiler ! nveglglessink \
  t. ! queue ! nvstreamdemux name=demux \
  demux.src_0 ! queue ! nveglglessink \
  demux.src_1 ! queue ! nveglglessink
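
As a rough C++ illustration of that tee + nvstreamdemux branch (this is not the actual parallel-inference app code; the helper name add_demux_branch and the per-source element chain are assumptions, and for RTSP output each nveglglessink would be replaced by an encoder/RTSP sink bin like the one deepstream-app builds for type=4 sinks):

#include <gst/gst.h>

/* Sketch: attach a second branch to an existing tee so each source in the
 * batch gets its own demuxed output, while the first tee branch keeps
 * feeding the existing tiler -> nvdsosd -> sink path. */
static void
add_demux_branch (GstElement *pipeline, GstElement *tee, guint num_sources)
{
  GstElement *queue = gst_element_factory_make ("queue", NULL);
  GstElement *demux = gst_element_factory_make ("nvstreamdemux", "demux");
  gst_bin_add_many (GST_BIN (pipeline), queue, demux, NULL);

  /* tee.src_%u (request pad, GStreamer >= 1.20 API) -> queue -> nvstreamdemux */
  GstPad *tee_src = gst_element_request_pad_simple (tee, "src_%u");
  GstPad *queue_sink = gst_element_get_static_pad (queue, "sink");
  gst_pad_link (tee_src, queue_sink);
  gst_object_unref (tee_src);
  gst_object_unref (queue_sink);
  gst_element_link (queue, demux);

  /* One per-source branch: demux.src_N -> queue -> nvvideoconvert -> nvdsosd -> sink */
  for (guint i = 0; i < num_sources; i++) {
    gchar *pad_name = g_strdup_printf ("src_%u", i);
    GstPad *demux_src = gst_element_request_pad_simple (demux, pad_name);
    g_free (pad_name);

    GstElement *q = gst_element_factory_make ("queue", NULL);
    GstElement *conv = gst_element_factory_make ("nvvideoconvert", NULL);
    GstElement *osd = gst_element_factory_make ("nvdsosd", NULL);
    GstElement *sink = gst_element_factory_make ("nveglglessink", NULL); /* or an RTSP sink bin */
    gst_bin_add_many (GST_BIN (pipeline), q, conv, osd, sink, NULL);
    gst_element_link_many (q, conv, osd, sink, NULL);

    GstPad *q_sink = gst_element_get_static_pad (q, "sink");
    gst_pad_link (demux_src, q_sink);
    gst_object_unref (demux_src);
    gst_object_unref (q_sink);
  }
}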