Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): TITAN RTX
• DeepStream Version: 5.1
• TensorRT Version: 7.2.1
• NVIDIA GPU Driver Version (valid for GPU only): 460
• Issue Type( questions, new requirements, bugs): questions
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description): Tee and queue plugins
I want to run two detection models in parallel in a single application using the DeepStream Python API. I have read about the tee and queue plugins, which are used to split/branch the pipeline. Is it possible to split the pipeline right after the streammux, link each branch's queue to its own model, and then build the rest of the pipeline from there?
For example:
streammux.link(tee)
queue1.link(pgie)
queue2.link(sgie)
and so on…
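Roughly, this is what I have in mind (a minimal sketch only; the element names, the placeholder config paths, and the downstream parts of each branch are assumptions, not working code). Since the tee's src pads are request pads, I expect each branch has to be linked through a requested pad rather than a plain element link:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("parallel-detection")

# streammux feeds a tee; each tee branch gets its own queue + nvinfer.
streammux = Gst.ElementFactory.make("nvstreammux", "streammux")
tee = Gst.ElementFactory.make("tee", "tee")
queue1 = Gst.ElementFactory.make("queue", "queue1")
queue2 = Gst.ElementFactory.make("queue", "queue2")
pgie1 = Gst.ElementFactory.make("nvinfer", "detector-1")
pgie2 = Gst.ElementFactory.make("nvinfer", "detector-2")

# Placeholder config paths for the two detection models (assumption).
pgie1.set_property("config-file-path", "detector1_config.txt")
pgie2.set_property("config-file-path", "detector2_config.txt")

for elem in (streammux, tee, queue1, queue2, pgie1, pgie2):
    pipeline.add(elem)

# streammux -> tee (the source bins would be linked to streammux as usual).
streammux.link(tee)

# The tee exposes request pads, so request one src pad per branch
# and link it to that branch's queue sink pad.
for queue in (queue1, queue2):
    tee_src = tee.get_request_pad("src_%u")
    queue_sink = queue.get_static_pad("sink")
    tee_src.link(queue_sink)

# Each queue then feeds its own inference element; the rest of each
# branch (nvvideoconvert, nvdsosd, sink, ...) would be linked after this.
queue1.link(pgie1)
queue2.link(pgie2)
```

My understanding is that the queues would decouple the two branches so one slower model does not stall the other. Is this the right way to do it, or is there a recommended approach for running two detectors in parallel in DeepStream?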