My system has the following specs:
GPU: GTX 1660ti
RAM: 32 GB
OS: Windows 11
Environment: Ubuntu 24.04.1 LTS (Through WSL)
Deepstream Version: 7.1
Nvidia-Driver Version: 566.03 | CUDA Version: 12.7
I’d like to know the best approach for performing facial recognition within DeepStream on a dGPU.
I already have my PGIE set up as a YOLOv7 face detector and my SGIE set up as InsightFace.
I now need to extract the embeddings and match them against existing ones in a suitable vector store.
For reference, I am using the test5 application, as I need to publish my results and face crops to Kafka.
Are there any existing references or sample applications that cover this approach?
What would be the best way to implement this in the test5 application? At which point in the test5 application will I be able to access the embeddings so I can match them against my vector store?
Are there any existing compatible models (.etlt files) that I could use directly for this task?
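To make the question concrete, this is roughly the kind of probe I am considering on the src pad of the SGIE, assuming output-tensor-meta=1 is set in its config so that gst-nvinfer attaches NvDsInferTensorMeta per object, and assuming output layer 0 is the FP32 embedding; match_embedding() is just a placeholder for the vector-store lookup:

```c
/* Sketch of a probe on the InsightFace SGIE's src pad.  Assumes the SGIE
 * config sets output-tensor-meta=1 so gst-nvinfer attaches
 * NvDsInferTensorMeta to every object it ran on, and that output layer 0
 * is the FP32 embedding (both worth verifying against the actual model). */
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"

static GstPadProbeReturn
sgie_src_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  (void) pad; (void) u_data;
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;

      /* Find the tensor output the SGIE attached to this object. */
      for (NvDsMetaList *l_user = obj_meta->obj_user_meta_list; l_user; l_user = l_user->next) {
        NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
        if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
          continue;

        NvDsInferTensorMeta *tmeta = (NvDsInferTensorMeta *) user_meta->user_meta_data;
        NvDsInferLayerInfo *layer = &tmeta->output_layers_info[0];
        const float *embedding = (const float *) tmeta->out_buf_ptrs_host[0];
        guint dim = layer->inferDims.numElements;   /* e.g. 512 for InsightFace */

        /* match_embedding() is a placeholder for the vector-store lookup
         * (L2-normalize, query Milvus/FAISS/..., write the best match back
         * into obj_meta or a custom user meta for the message converter). */
        /* match_embedding (embedding, dim, obj_meta); */
        (void) embedding; (void) dim;
      }
    }
  }
  return GST_PAD_PROBE_OK;
}
```

The idea would be to attach this with gst_pad_add_probe() on the SGIE's src pad (or an element further downstream) so the match result can be written into the object metadata before nvmsgconv runs. Is that the right place to hook into the test5 application?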
Do you mean you want samples for sending the extracted embeddings and cropped images to Kafka?
If so, there is a cropped-object-image message broker sample in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test4. Please see item 3 of the “/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test4” section in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test4/README for how to run the sample.
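For getting the embedding and the match result out over Kafka: the test4/test5 apps already build an NvDsEventMsgMeta per object (in the generate_event_msg_meta() helper of the sample sources) before nvmsgconv/nvmsgbroker, and its objSignature field can carry a per-object feature vector. Below is a minimal sketch of copying the embedding into it; attach_embedding() and matched_identity are illustrative names, and whether objSignature actually appears in the Kafka payload depends on the msgconv schema or custom payload you configure:

```c
/* Sketch: copy the face embedding (and match result) into the
 * NvDsEventMsgMeta that test4/test5 build in generate_event_msg_meta(),
 * so it travels through nvmsgconv/nvmsgbroker.  objSignature is owned by
 * the meta; the copy/free callbacks registered by the sample apps already
 * duplicate/free it, but check how they interpret "size" (element count
 * vs. bytes) in your DeepStream version. */
#include <glib.h>
#include "nvdsmeta_schema.h"

static void
attach_embedding (NvDsEventMsgMeta *msg_meta, const float *embedding,
                  guint dim, const gchar *matched_identity)
{
  msg_meta->objSignature.size = dim;
  msg_meta->objSignature.signature = (gdouble *) g_malloc0 (dim * sizeof (gdouble));
  for (guint i = 0; i < dim; i++)
    msg_meta->objSignature.signature[i] = embedding[i];

  /* Illustrative: carry the vector-store match as the object id; whether
   * and where this shows up in the Kafka JSON depends on the schema /
   * custom payload configured for nvmsgconv. */
  if (matched_identity) {
    g_free (msg_meta->objectId);
    msg_meta->objectId = g_strdup (matched_identity);
  }
}
```

For the face crops themselves, the encode-and-attach flow from the test4 sample (item 3 in its README) applies unchanged; the sketch above only concerns the message payload.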
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.