Deepstream parallel pipeline inference

Hi
Currently, we are running 16 cameras on an AGX Orin. Of these, 4 cameras require face detection and 6 require license plate recognition (LPR). We are using three models: the primary model is YOLOv8, the secondary is a face detection model, and the third is an LPR model. However, all camera feeds are currently processed by all three models, which consumes extra RAM and reduces FPS.
Is there a way to optimize the pipeline so that the face detection model runs only on the 4 specific camera feeds and the LPR model runs only on the 6 specific feeds? This would save RAM and improve performance.

Could you try our deepstream_parallel_inference_app for your scenario?
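For context, the parallel inference app splits the batched stream into per-branch sub-batches, so each inference branch only processes the sources assigned to it — which is exactly the per-camera routing described above. A minimal sketch of the idea, assuming a branch layout similar to the app's sample configs (the key names and source IDs below are illustrative, not copied from the app; please check the sample configuration files shipped with the app for the exact format):

```
# Illustrative only — key names and IDs are assumptions, not the app's actual schema.
branch0:
  # Face-detection branch: route only the 4 face cameras (sources 0-3) here.
  src-ids: 0;1;2;3
branch1:
  # LPR branch: route only the 6 LPR cameras (sources 4-9) here.
  src-ids: 4;5;6;7;8;9
```

Because each branch infers only on its assigned sources, the face and LPR models no longer run on all 16 feeds, which should reduce both memory use and per-frame latency.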

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.