• Hardware Platform: RTX 3070
• DeepStream Version: DeepStream 6.2
• TensorRT Version: 8.6
• Issue Type: Question
I would like to ask whether DeepStream supports running multiple models in parallel on the same frames (model parallelism). In my current project I have a pose estimation model and a YOLO detection model. My understanding is that DeepStream's secondary GIE normally operates on the bounding boxes produced by the primary GIE. Is it possible to configure the secondary GIE to process the entire frame instead, even after the primary GIE has completed its inference? Is such a configuration feasible in DeepStream?
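
For reference, this is a rough sketch of how I imagine the second model's nvinfer config could look if full-frame processing is possible. The file name, model paths, and values are placeholders on my side; my assumption (based on the Gst-nvinfer documentation) is that `process-mode=1` would make this instance run on the whole frame rather than on the primary GIE's objects:

```ini
# config_infer_pose.txt -- hypothetical config for the pose estimation model
[property]
gpu-id=0
# 1 = primary (full-frame) mode, 2 = secondary (operate on objects);
# my assumption is that setting 1 here lets this "secondary" instance see the entire frame
process-mode=1
# separate unique id so the pose metadata can be told apart from the YOLO detector's output
gie-unique-id=2
onnx-file=pose_estimation.onnx   # placeholder model file
network-mode=2                   # FP16
batch-size=1
# pose estimation is neither a detector nor a classifier, so expose the raw output tensors
network-type=100
output-tensor-meta=1
```

Would a setup like this work, or is there a recommended way to run two full-frame models in the same pipeline?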