Workload of multiple sources into multiple sinks in GStreamer

Hi,

I have 8 stream sources that I want to add into one source bin using a helper function (GstElement *create_source_bin).
Then I want to feed the data from these source bins into 3 pipelines (three different AI models in parallel) that output to three sinks. All of this is done in a single application.

Is this possible, and should the workload for this case be [8 + 3] or [(8 + 3) x 3] threads?
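For reference, here is a rough sketch of the source side I have in mind. It assumes a create_source_bin() helper along the lines of the DeepStream reference apps (e.g. deepstream-test3) and placeholder URIs, not my actual code:

```c
/* Rough sketch only: create_source_bin() is assumed to be a helper similar
 * to the one in the DeepStream reference apps, and the URIs are placeholders. */
#include <gst/gst.h>

#define NUM_SOURCES 8

GstElement *create_source_bin (guint index, gchar *uri);   /* my helper */

static void
add_sources (GstElement *pipeline, GstElement *streammux, gchar **uris)
{
  for (guint i = 0; i < NUM_SOURCES; i++) {
    GstElement *source_bin = create_source_bin (i, uris[i]);
    gst_bin_add (GST_BIN (pipeline), source_bin);

    /* Link each source bin to a request sink pad of nvstreammux. */
    gchar pad_name[16];
    g_snprintf (pad_name, sizeof (pad_name), "sink_%u", i);
    GstPad *sinkpad = gst_element_get_request_pad (streammux, pad_name);
    GstPad *srcpad = gst_element_get_static_pad (source_bin, "src");
    gst_pad_link (srcpad, sinkpad);
    gst_object_unref (srcpad);
    gst_object_unref (sinkpad);
  }
}
```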

Many thanks,

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the content of the configuration files, the command line used, and other details for reproducing the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

Do you mean you want to know how many threads will be in the app?

Yeah, and I want to make sure I only use 8 stream sources, not 8 x 3.

If you have 3 different models that need to run inference on the 8 streams, one pipeline is enough.

I want to separate the output frames into 3 different outputs based on the results of the 3 models. Since multiple output frames are not supported by DS, that's why I need to create 3 pipelines.

Understood. If you tee the nvstreammux output to the three models, the number will be 8 + 3 + 3.
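Something like the following sketch is what I mean, using gst_parse_launch() for brevity. The nvinfer config-file paths and the fakesink outputs are placeholders, and the 8 source bins would still be attached to the mux as in your snippet above:

```c
/* Single pipeline: the batched nvstreammux output is tee'd to three nvinfer
 * branches, each with its own queue (its own streaming thread) and its own
 * sink. Config paths and sinks are placeholders for illustration. */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GError *err = NULL;
  GstElement *pipeline = gst_parse_launch (
      "nvstreammux name=mux batch-size=8 width=1920 height=1080 ! tee name=t "
      "t. ! queue ! nvinfer config-file-path=model_a_config.txt ! nvvideoconvert ! nvdsosd ! fakesink "
      "t. ! queue ! nvinfer config-file-path=model_b_config.txt ! nvvideoconvert ! nvdsosd ! fakesink "
      "t. ! queue ! nvinfer config-file-path=model_c_config.txt ! nvvideoconvert ! nvdsosd ! fakesink",
      &err);
  if (!pipeline) {
    g_printerr ("Failed to build pipeline: %s\n", err ? err->message : "unknown");
    g_clear_error (&err);
    return -1;
  }

  /* The 8 source bins are added and linked to "mux" here, as in the
   * add_sources() sketch earlier in the thread. */

  GMainLoop *loop = g_main_loop_new (NULL, FALSE);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (loop);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  g_main_loop_unref (loop);
  return 0;
}
```

Note that each queue after the tee runs its branch in a separate streaming thread, so the branches add threads on top of the 8 source threads.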

Hmm, why 8 + 3 + 3? 8 source threads + 3 pipeline threads, and …?