I’m using the DeepStream parallel inference C++ application to run multiple models on multiple RTSP streams. Now I’d also like to output multiple RTSP streams, which will be restreamed to WebRTC using go2rtc and displayed in a web application. I’m a beginner with DeepStream, though, so this approach might not be optimal; I’m open to other approaches to displaying live streams with detections/boxes/etc. in a web app.
I’ve tried the following things, which didn’t work:
Removing the tiled display (this was suggested in another thread and worked with the deepstream reference app): the program ends almost instantly, because the whole linking functionality depends on it (`if (config->tiled_display_config.enable)`).
Adding source IDs manually to the sources when parsing the configuration YAML file (`config->multi_source_config[source_id].source_id = source_id;` in the `parse_config_file_yaml` function) and adding a `source-id` property to each sink in the configuration YAML. The sink with source ID 0 worked, but it output the full tiled display; anything other than 0 doesn’t work.
Thanks for any advice.
• Hardware Platform (Jetson / GPU) - GPU
• DeepStream Version - 7.1
• TensorRT Version - 10.3.0.26-1+cuda12.5
• NVIDIA GPU Driver Version (valid for GPU only) - 550.127.08
• Issue Type (questions, new requirements, bugs) - questions
The tiler plugin is used to composite a batch of frames into one frame. As the code shows, the app only covers the case of tiled_display_config.enable=true.
If you want multiple RTSP outputs, you only need to add multiple sinks with type=4. Please refer to /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt for how to set up the RTSP output.
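For example, the sink groups in a deepstream-app-style config could look like the sketch below. This is only an illustration based on the sample configs: the group names, ports, and bitrate are placeholder values to adjust, and each RTSP sink needs its own rtsp-port/udp-port pair.

```
[sink1]
enable=1
# type=4 = RTSP streaming output
type=4
# codec: 1=h264, 2=h265
codec=1
bitrate=4000000
rtsp-port=8554
udp-port=5400

[sink2]
enable=1
type=4
codec=1
bitrate=4000000
# second RTSP sink must use different ports
rtsp-port=8555
udp-port=5401
```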
deepstream-parallel-inference is open source; you can modify the code to try the following pipeline:
```
… tee name=t -> tiler -> nvdsosd -> nveglglessink
t. -> nvstreamdemux -> nveglglessink   (show source0)
t. -> nvstreamdemux -> nveglglessink   (show source1)
t. -> …
```
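As a rough gst-launch-style sketch of that idea (untested pseudocode: the URIs, batch size, and inference config path are placeholders, and in practice one nvstreamdemux instance exposes one src pad per batched source rather than one demux per branch):

```
gst-launch-1.0 \
  nvurisrcbin uri=rtsp://<camera-0> ! m.sink_0 \
  nvurisrcbin uri=rtsp://<camera-1> ! m.sink_1 \
  nvstreammux name=m batch-size=2 width=1920 height=1080 ! \
  nvinfer config-file-path=<pgie-config> ! tee name=t \
  t. ! queue ! nvmultistreamtiler ! nvdsosd ! nveglglessink \
  t. ! queue ! nvstreamdemux name=d \
  d.src_0 ! queue ! nvvideoconvert ! nvdsosd ! nveglglessink \
  d.src_1 ! queue ! nvvideoconvert ! nvdsosd ! nveglglessink
```

For RTSP output instead of on-screen display, each per-source branch after nvdsosd would end in an encoder and RTSP sink chain rather than nveglglessink.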