How to Process Multiple Sources Simultaneously in a DeepStream Pipeline

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Tesla T4
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version: 8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs): question

I’m currently working with a DeepStream pipeline that processes 8-second videos, producing a processed output video and its metadata. However, the current setup only processes a single video at a time.

I have added the pipeline diagram for reference. How can I modify the pipeline to enable concurrent processing of multiple videos? Any insights or examples would be greatly appreciated! Thank you!

The deepstream_test3_app is a sample that supports multiple input videos:

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test3/deepstream_test3_app.c

uridecodebin |
             |
uridecodebin |  --> nvstreammux --> nvinfer .....
             |
uridecodebin |
.......
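As a rough sketch, the same multi-source topology can be expressed as a gst-launch-1.0 pipeline. The file URIs and the nvinfer config path below are placeholders for your own setup; the nvstreammux batch-size should match the number of sources:

```shell
# Sketch only: two sources batched into one nvstreammux, then one nvinfer.
# URIs and the config-file path are placeholders, replace with your own.
gst-launch-1.0 \
  nvstreammux name=mux batch-size=2 width=1920 height=1080 batched-push-timeout=40000 ! \
  nvinfer config-file-path=./config_infer_primary.txt ! \
  nvvideoconvert ! nvdsosd ! fakesink \
  uridecodebin uri=file:///path/to/video0.mp4 ! mux.sink_0 \
  uridecodebin uri=file:///path/to/video1.mp4 ! mux.sink_1
```

In the C sample, the same thing is done programmatically: one uridecodebin is created per input URI in a loop, and each decoder's source pad is linked to a requested sink_%u pad on nvstreammux.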

Thanks for the reply!

How can I extract the metadata for each video separately? Also, how can I save the processed videos separately?
Any insights or examples would be greatly appreciated!

  1. In the frame_meta_list, each entry is the metadata for one frame, and frame_meta->source_id tells you which input video the frame came from:

NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
NvDsMetaList *l_frame = NULL;

for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
     l_frame = l_frame->next) {
  NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
  /* frame_meta->source_id identifies the input video this frame belongs to */
}

2. Use nvstreamdemux after nvinfer to split the batch back into one stream per source; each output pad can then be linked to its own encoder and filesink:

                              | --> video0
nvinfer --> nvstreamdemux --> | --> video1
                              | --> video2
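A minimal sketch of that demux topology as a gst-launch-1.0 pipeline, assuming two sources and H.264 output files (all paths and the nvinfer config file are placeholders):

```shell
# Sketch only: demux the inferred batch back into per-source streams and
# encode each one to its own file. Paths/config are placeholders.
gst-launch-1.0 \
  nvstreammux name=mux batch-size=2 width=1920 height=1080 ! \
  nvinfer config-file-path=./config_infer_primary.txt ! \
  nvstreamdemux name=demux \
  uridecodebin uri=file:///path/to/video0.mp4 ! mux.sink_0 \
  uridecodebin uri=file:///path/to/video1.mp4 ! mux.sink_1 \
  demux.src_0 ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=out0.mp4 \
  demux.src_1 ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=out1.mp4
```

demux.src_%u pads are request pads; src_0 carries the frames that entered through mux.sink_0, so each output file stays matched to its input video.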

@junshengy I use a tee to create 2 branches for 2 models, like this.

How can I extract the metadata for each video, for each model? The buffers in both branches share the same data. Do you have any solution?

@user40864 Sorry for the long delay. Please open a new topic for your issue. The GstBuffer after nvstreammux contains a batch (GPU buffer), but the tee element can’t copy it for every branch.
You can refer to GitHub - NVIDIA-AI-IOT/deepstream_parallel_inference_app: A project demonstrating how to use nvmetamux to run multiple models in parallel.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.