Splitting the pipeline for different models

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): TITAN RTX
• DeepStream Version: 5.1
• TensorRT Version: 7.2.1
• NVIDIA GPU Driver Version (valid for GPU only): 460
• Issue Type (questions, new requirements, bugs): questions
• Requirement details (for a new requirement, include the module name, i.e. which plugin or sample application, and the function description): Tee and queue plugins

I want to run two detection models in parallel within a single application using the Python API. I read about the tee and queue plugins, which are used to split the pipeline into branches. Is it possible to split the pipeline right after streammux, link each model to one of the queues, and then build the rest of the pipeline after that?

For example:
queue2.link(sgie) and so on…

Yeah, it’s possible to do that.
You can also refer to Single source and multi inference models in parallel - #7 by chauncey.wang
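For reference, a minimal sketch of that kind of split in the Python bindings might look like the following. The element names, property values, and config file paths are placeholders for illustration, not taken from the thread above; each branch would continue with its own downstream elements (nvvideoconvert, nvdsosd, sink, etc.).

```python
# Sketch: split a DeepStream pipeline after nvstreammux with a tee so that
# two nvinfer models run in parallel branches. Paths/names are placeholders.
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("parallel-inference")

def make(factory, name):
    # Create an element, add it to the pipeline, or exit with a clear message.
    elem = Gst.ElementFactory.make(factory, name)
    if not elem:
        sys.exit(f"Unable to create element: {factory}")
    pipeline.add(elem)
    return elem

streammux = make("nvstreammux", "stream-muxer")
streammux.set_property("batch-size", 1)
streammux.set_property("width", 1280)
streammux.set_property("height", 720)

tee = make("tee", "splitter")

# Branch 1: first detection model behind its own queue.
queue1 = make("queue", "queue1")
pgie1 = make("nvinfer", "inference-1")
pgie1.set_property("config-file-path", "model1_config.txt")  # placeholder

# Branch 2: second detection model behind its own queue.
queue2 = make("queue", "queue2")
pgie2 = make("nvinfer", "inference-2")
pgie2.set_property("config-file-path", "model2_config.txt")  # placeholder

# Link the common part, then fan out: request one tee src pad per branch.
streammux.link(tee)
for queue, infer in ((queue1, pgie1), (queue2, pgie2)):
    tee_src = tee.get_request_pad("src_%u")
    tee_src.link(queue.get_static_pad("sink"))
    queue.link(infer)
# ... link each branch's remaining elements and set the pipeline to PLAYING.
```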