Constructing a multistream pipeline

• Hardware Platform (Jetson / GPU): NVIDIA GeForce RTX 3090
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): N/A
• TensorRT Version: 8.4.0
• NVIDIA GPU Driver Version (valid for GPU only): 535.113.01
• Issue Type (questions, new requirements, bugs): questions

Hello,

How can I build a multistream pipeline with the gst-launch command? Something like this one:

gst-launch-1.0 \
uridecodebin3 uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! \
nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=1 batched-push-timeout=40000 width=1920 height=1080 live-source=TRUE ! queue ! \
nvvideoconvert ! queue ! \
nvinfer name=nvinfer1 config-file-path="/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_pgie_config.txt" ! queue ! \
nvtracker tracker-width=240 tracker-height=200 ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ll-config-file="/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_tracker_config.txt" ! queue ! \
nvinfer name=nvinfer2 process-mode=secondary infer-on-gie-id=1 infer-on-class-ids="0:" batch-size=16 config-file-path="/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_sgie1_config.txt" ! queue ! \
nvinfer name=nvinfer3 process-mode=secondary infer-on-gie-id=1 infer-on-class-ids="0:" batch-size=16 config-file-path="/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_sgie2_config.txt" ! queue ! \
nvinfer name=nvinfer4 process-mode=secondary infer-on-gie-id=1 infer-on-class-ids="0:" batch-size=16 config-file-path="/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_sgie3_config.txt" ! queue ! \
fakesink name=fakesink0 sync=false

I have 3 sources, 1 PGIE, a tracker, and 2 SGIEs. I am trying to measure the inter-element latency, so I want to isolate the pipeline from the Python application I already have.
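
For the measurement itself, my plan is to wrap the command in GStreamer's latency tracer, roughly like the sketch below (assuming the tracer is available in this GStreamer build; the per-element flags need GStreamer 1.18 or newer, otherwise plain GST_TRACERS=latency only reports end-to-end latency at the sink):

# Hypothetical wrapper: writes per-element latency records to latency.log
GST_TRACERS="latency(flags=element)" GST_DEBUG="GST_TRACER:7" GST_DEBUG_FILE=latency.log \
gst-launch-1.0 uridecodebin3 uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! fakesink sync=false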

Thanks.

You may refer to the command in "Preprocess in PGIE mode for Multi-stream" on this page: Gst-nvdspreprocess (Alpha) — DeepStream 6.3 Release documentation (nvidia.com)
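
The general idea is one source branch per nvstreammux sink pad, with batch-size matching the number of sources. Here is a minimal sketch with 3 copies of the same sample file and only your PGIE (swap in your own URIs and chain the tracker and SGIEs back in exactly as in your command; live-source is dropped because these are file sources):

gst-launch-1.0 \
nvstreammux name=nvstreammux0 batch-size=3 batched-push-timeout=40000 width=1920 height=1080 ! queue ! \
nvvideoconvert ! queue ! \
nvinfer name=nvinfer1 config-file-path="/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_pgie_config.txt" ! queue ! \
fakesink sync=false \
uridecodebin3 uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! nvstreammux0.sink_0 \
uridecodebin3 uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! nvstreammux0.sink_1 \
uridecodebin3 uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! nvstreammux0.sink_2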

Another option: you can use nvmultiurisrcbin to process multiple streams.

For example, with the following CLI:

gst-launch-1.0 \
nvmultiurisrcbin width=1920 height=1080 uri-list="file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_walk.mov;file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_ride_bike.mov;file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_run.mov;file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_push.mov" ! queue ! \
nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=1 batched-push-timeout=40000 width=1920 height=1080 live-source=TRUE ! queue ! \
nvvideoconvert ! queue ! \
nvinfer name=nvinfer1 config-file-path="/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_pgie_config.txt" ! queue ! \
nvtracker tracker-width=240 tracker-height=200 ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ll-config-file="/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_tracker_config.txt" ! queue ! \
nvinfer name=nvinfer2 process-mode=secondary infer-on-gie-id=1 infer-on-class-ids="0:" batch-size=16 config-file-path="/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_sgie1_config.txt" ! queue ! \
nvinfer name=nvinfer3 process-mode=secondary infer-on-gie-id=1 infer-on-class-ids="0:" batch-size=16 config-file-path="/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_sgie2_config.txt" ! queue ! \
nvinfer name=nvinfer4 process-mode=secondary infer-on-gie-id=1 infer-on-class-ids="0:" batch-size=16 config-file-path="/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_sgie3_config.txt" ! queue ! \
fakesink name=fakesink0 sync=false
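
One more note: nvmultiurisrcbin already wraps uridecodebin and an nvstreammux internally, so you may be able to drop the explicit nvstreammux and connect it straight to the inference chain. A sketch (max-batch-size is my assumption for the batching property here; please confirm the exact property names with gst-inspect-1.0 nvmultiurisrcbin):

gst-launch-1.0 \
nvmultiurisrcbin max-batch-size=4 width=1920 height=1080 uri-list="file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_walk.mov;file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_ride_bike.mov" ! queue ! \
nvinfer name=nvinfer1 config-file-path="/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2/dstest2_pgie_config.txt" ! queue ! \
fakesink sync=false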

That’s awesome!!

Thank you so much!
