Parallel Execution of Multiple Detection Models in DeepStream for Pose and YOLO Detection

Please provide complete information as applicable to your setup.

• Hardware Platform: RTX 3070
• DeepStream Version: DeepStream 6.2
• TensorRT Version: 8.6
• Issue Type: questions

I would like to ask whether DeepStream supports model parallelism. In my current project, I have a pose estimation model and a YOLO detection model. My understanding is that DeepStream’s secondary GIE processes the bounding boxes generated by the primary GIE. Is it possible to configure the secondary GIE to run inference on the entire frame after the primary GIE has completed its inference? Is such a configuration feasible in DeepStream?

Please refer to NVIDIA-AI-IOT/deepstream_parallel_inference_app on GitHub: a project demonstrating how to use nvmetamux to run multiple models in parallel.


You can configure two PGIEs in the pipeline.
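As a minimal sketch, two PGIEs can be chained as consecutive nvinfer elements in a gst-launch-style pipeline. The file names and resolution values below are illustrative, not from this thread; both models run on the full frame because each config uses process-mode=1, and each nvinfer instance carries a distinct unique-id so their metadata can be told apart:

```
gst-launch-1.0 filesrc location=sample.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! \
  mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=yolov4_pgie_config.txt unique-id=1 ! \
  nvinfer config-file-path=yolov8_pose_pgie_config.txt unique-id=2 ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```

For truly parallel branches (rather than two GIEs in sequence), the deepstream_parallel_inference_app referenced above tees the batched stream into separate inference branches and merges the metadata with nvmetamux.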


Can I build a pipeline with YOLOv4 as the PGIE and YOLOv8-pose as the SGIE, and then set the SGIE parameter process-mode=1 so that YOLOv8-pose runs detection on the entire frame?

When you set process-mode=1, that GIE acts as a PGIE. You need to construct the pipeline in code yourself; the deepstream-app sample does not support multiple PGIEs.
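For reference, this is the nvinfer config fragment being discussed. The property values shown are illustrative; in gst-nvinfer, process-mode selects full-frame (primary) versus object-cropped (secondary) inference:

```
[property]
# 1 = run on the full frame (primary mode)
# 2 = run on objects detected by an upstream GIE (secondary mode)
process-mode=1
# each GIE in the pipeline needs its own id, distinct from the other PGIE
gie-unique-id=2
```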


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.