Running two primary inference engines in parallel

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.1
• TensorRT Version: 8.5.1
• Issue Type: question

I have an object detection model and a segmentation model. Can I run both models in parallel on a single video? My plan is to decode the video first, then use a tee and queues to pass the decoded frames to each model individually and run them in parallel.

Hi @vino_mm, could you tell me why you want to run both models in parallel on a single video? What's your use case?

@yuweiw I'm working on a real-time analytics application that requires these two models to run in parallel at the same time.

The processing flow between plugins is asynchronous, so the two models already run in parallel even without a tee: one model can be inferring on the first frame while the other is inferring on the second frame at the same time.

@yuweiw I have a separate nvstreammux plugin for each model. After video decoding, how can I pass the same video to both nvstreammux plugins without using a tee?

If you just want to run two models in primary mode in parallel at the same time, we don't suggest using a tee. As I said, although the pipeline is drawn as serial, like below, the plugins are asynchronous:

.....nvstreammux->pgie(detector)->pgie(classifier).....

For how to run two primary inference engines, you can refer to the link below:
https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/back-to-back-detectors
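
Roughly, that reference app amounts to chaining two nvinfer elements, each in primary mode with its own config file. Below is a minimal sketch of the serial layout, assuming an H.264 input file and two hypothetical nvinfer configs (det.txt and seg.txt), each with process-mode=1 and its own gie-unique-id:

```python
#!/usr/bin/env python3
# Minimal sketch of the serial "back-to-back" layout described above.
# Input file name and the configs det.txt / seg.txt are placeholders.
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

def make(factory, name):
    elem = Gst.ElementFactory.make(factory, name)
    if elem is None:
        sys.exit(f"Could not create {factory}")
    return elem

pipeline = Gst.Pipeline.new("back-to-back")
src = make("filesrc", "src")
src.set_property("location", "input.h264")           # placeholder input
parse = make("h264parse", "parse")
dec = make("nvv4l2decoder", "dec")
mux = make("nvstreammux", "mux")
mux.set_property("batch-size", 1)
mux.set_property("width", 1280)                      # assumed resolution
mux.set_property("height", 720)
pgie1 = make("nvinfer", "pgie-det")
pgie1.set_property("config-file-path", "det.txt")    # hypothetical config
pgie2 = make("nvinfer", "pgie-seg")
pgie2.set_property("config-file-path", "seg.txt")    # hypothetical config
sink = make("fakesink", "sink")

for e in (src, parse, dec, mux, pgie1, pgie2, sink):
    pipeline.add(e)
src.link(parse)
parse.link(dec)
dec.get_static_pad("src").link(mux.get_request_pad("sink_0"))
# The layout is serial, but each element runs in its own streaming thread,
# so the two models infer on different frames concurrently.
mux.link(pgie1)
pgie1.link(pgie2)
pgie2.link(sink)

pipeline.set_state(Gst.State.PLAYING)
pipeline.get_bus().timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```

Both nvinfer instances attach their results to the same batched buffers' metadata, so a downstream probe can read both sets of results, distinguished by each model's gie-unique-id.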

@yuweiw both models expect different input sizes. If I want to use two nvstreammux plugins, how will that work? Or is there another way that avoids multiple nvstreammux plugins?

You can just set a different network input size in each nvinfer config file; nvinfer scales the batched frames to its own network resolution internally, so both models can share a single nvstreammux. Please refer to the link below:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html
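
For example, the two config files might differ only in their network-related keys (illustrative values only; gie-unique-id must be unique per nvinfer instance, network-type selects detector vs. segmentation, and infer-dims is the model's CHW input size):

```
# det.txt (detector, illustrative)
[property]
gie-unique-id=1
process-mode=1
network-type=0
infer-dims=3;544;960

# seg.txt (segmentation, illustrative)
[property]
gie-unique-id=2
process-mode=1
network-type=2
infer-dims=3;512;512
```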

@yuweiw okay, got it. Since I'm using two primary inference models, I need to save an output video for each of them: for the object detection model I need to save the video from the nvdsosd plugin, and for the segmentation model the video from the nvsegvisual plugin. How can I achieve this if we are not branching with tees and queues?

===> Since I'm using two primary inference models, I need to save an output video for each of them.
We suggest running them separately at present. Alternatively, you can use the tee plugin to develop this yourself, as below, but it may be complicated. We'll consider this requirement in a future version.

                      ->pgie(detect)->.....
........streammux->tee
                      ->pgie(seg)->.....
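
A minimal sketch of that tee-based layout, reusing the same hypothetical det.txt / seg.txt configs as above; the detection branch saves the nvdsosd output and the segmentation branch saves the nvsegvisual output, each to its own MP4 file:

```python
#!/usr/bin/env python3
# Sketch of the tee layout above: one decoded, batched stream is split into
# a detection branch (nvinfer -> nvdsosd) and a segmentation branch
# (nvinfer -> nvsegvisual), each encoded to its own file. File names and
# the configs det.txt / seg.txt are placeholders for your own setup.
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("tee-two-pgie")

def make(factory, name, **props):
    """Create an element, set properties, and add it to the pipeline."""
    elem = Gst.ElementFactory.make(factory, name)
    if elem is None:
        sys.exit(f"Could not create {factory}")
    for key, value in props.items():
        elem.set_property(key.replace("_", "-"), value)
    pipeline.add(elem)
    return elem

def link_chain(elems):
    """Link a list of elements in order, failing loudly on error."""
    for a, b in zip(elems, elems[1:]):
        if not a.link(b):
            sys.exit(f"Failed to link {a.get_name()} -> {b.get_name()}")

# Shared front end: file source -> decode -> nvstreammux -> tee.
src = make("filesrc", "src", location="input.h264")
h264parse = make("h264parse", "h264parse")
dec = make("nvv4l2decoder", "dec")
mux = make("nvstreammux", "mux", batch_size=1, width=1280, height=720)
tee = make("tee", "t")
link_chain([src, h264parse, dec])
dec.get_static_pad("src").link(mux.get_request_pad("sink_0"))
mux.link(tee)

# Detection branch: queue -> pgie -> nvdsosd -> encode -> MP4 file.
det = [make("queue", "q1"),
       make("nvinfer", "pgie-det", config_file_path="det.txt"),
       make("nvvideoconvert", "conv1a"),     # to RGBA for nvdsosd
       make("nvdsosd", "osd"),
       make("nvvideoconvert", "conv1b"),     # back to NV12 for the encoder
       make("nvv4l2h264enc", "enc1"),
       make("h264parse", "parse1"),
       make("qtmux", "qtmux1"),
       make("filesink", "sink1", location="detect_out.mp4")]
tee.get_request_pad("src_%u").link(det[0].get_static_pad("sink"))
link_chain(det)

# Segmentation branch: queue -> pgie -> nvsegvisual -> encode -> MP4 file.
seg = [make("queue", "q2"),
       make("nvinfer", "pgie-seg", config_file_path="seg.txt"),
       make("nvsegvisual", "segvis", width=1280, height=720),
       make("nvvideoconvert", "conv2"),
       make("nvv4l2h264enc", "enc2"),
       make("h264parse", "parse2"),
       make("qtmux", "qtmux2"),
       make("filesink", "sink2", location="seg_out.mp4")]
tee.get_request_pad("src_%u").link(seg[0].get_static_pad("sink"))
link_chain(seg)

# Run until EOS or error; EOS lets qtmux finalize both files.
pipeline.set_state(Gst.State.PLAYING)
pipeline.get_bus().timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```

The queue after each tee pad is what decouples the branches into separate streaming threads, so one branch stalling does not block the other.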

@yuweiw thank you
