How to Avoid Re-processing Already Recognized Faces in DeepStream Python?

In DeepStream Python, I am working on a face recognition task. My processing pipeline is pgie (object detection) → tracking → sgie1 (face detection) → sgie2 (face recognition). The issue is that faces are detected and recognized (sgie1 + sgie2) on every frame. I want to reduce the computational load by ensuring that faces belonging to already recognized object IDs are not processed again. How can I achieve this?

Do you mean that once you identify a person, you don’t need to identify them again for the rest of the video?

Yes, that’s what I mean. Once I identify a person, I don’t need to re-identify them for the rest of the video as long as the tracking ID persists.

We do not currently support this feature, but you can implement it yourself in your app:

  1. Record all the identified tracker IDs.
  2. In a probe on the src pad of the tracker plugin, remove the bboxes of objects whose IDs are already recorded (see the sketch after this list).
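A minimal sketch of that approach with the DeepStream Python bindings (pyds) is below. The `recognized_ids` set, the probe function names, and the assumption that sgie2 attaches classifier metadata to the object it ran on are all illustrative, not part of an official API; adapt the bookkeeping to however your sgie2 reports its result (e.g. use `obj_meta.parent.object_id` if the result is attached to a child face object added by sgie1).

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

# Hypothetical bookkeeping: tracker IDs whose faces sgie2 has already
# recognized. Filled in the sgie2 probe, consulted in the tracker probe.
recognized_ids = set()


def tracker_src_pad_probe(pad, info, u_data):
    """Drop objects whose tracker ID is already recognized so that
    sgie1/sgie2 no longer process them. Attach to the tracker's src pad."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            try:
                l_next = l_obj.next  # save before a possible removal
            except StopIteration:
                l_next = None
            if obj_meta.object_id in recognized_ids:
                # Removing the object meta here keeps it out of sgie1/sgie2.
                pyds.nvds_remove_obj_meta_from_frame(frame_meta, obj_meta)
            l_obj = l_next
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK


def sgie2_src_pad_probe(pad, info, u_data):
    """Record tracker IDs that sgie2 produced a result for. Assumes the
    recognition result is attached as classifier metadata on the same
    object; adjust if your sgie2 outputs something else."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            if obj_meta.classifier_meta_list is not None:
                recognized_ids.add(obj_meta.object_id)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

The probes would then be attached when building the pipeline; the element variable names `tracker` and `sgie2` below are placeholders for your own elements:

```python
tracker.get_static_pad("src").add_probe(
    Gst.PadProbeType.BUFFER, tracker_src_pad_probe, 0)
sgie2.get_static_pad("src").add_probe(
    Gst.PadProbeType.BUFFER, sgie2_src_pad_probe, 0)
```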

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.