I need to build a face recognition app using DeepStream 5.0, so I basically need a face detector (the MTCNN model) and a feature extractor (the Keras FaceNet model). I have Caffe model and prototxt files for all three MTCNN stages, and a UFF file for the FaceNet model. I want to know how to integrate these using DeepStream.
There is also a face detection model that I found, facenet-120. I am willing to use this model instead of the MTCNN solution for detecting faces. Please also suggest how to use it together with the FaceNet model.
My goal is to output a 128-D vector for one face from the FaceNet model.
Hello!
I don’t think I will need the detectnet.prototxt file. I have all the models I need for facenet-120.
It is in the following directory: jetson-inference/build/aarch64/bin/networks/facenet-120
If I use facenet-120 as the primary model, no secondary model is required, since I can access the embeddings directly, as @AastaLLL suggested:
@AastaLLL It would be really helpful if you could show in code how to obtain the 128-D vector as output using deepstream-infer-tensor-meta-test. Right now I am only interested in obtaining the vector itself.
The Gst-nvinfer plugin can attach raw output tensor data generated by a TensorRT inference engine as metadata. It is added as an NvDsInferTensorMeta in the frame_user_meta_list member of NvDsFrameMeta for primary (full frame) mode, or in the obj_user_meta_list member of NvDsObjectMeta for secondary (object) mode.
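For this to happen, the nvinfer element must be told to attach the raw tensors. A minimal config-file sketch (the `output-tensor-meta` and `process-mode` keys are from the Gst-nvinfer configuration specification; the rest of the config file is omitted here):

```
[property]
# Attach raw output tensors as NvDsInferTensorMeta to the metadata
output-tensor-meta=1
# 1 = primary (full-frame) mode, so the meta lands in frame_user_meta_list
process-mode=1
```

The same switch can alternatively be set on the element itself, e.g. `g_object_set (G_OBJECT (pgie), "output-tensor-meta", TRUE, NULL);`, which is what the deepstream-infer-tensor-meta-test sample does.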
…
I am using the facenet-120 model as my pgie network here. The problem is that the output node inception_5b/output is present in the prototxt file deploy.prototxt, but it is not present in the engine.
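One thing worth checking: when nvinfer builds the engine from a Caffe model, only the blobs listed as outputs are kept in the engine. A hedged config sketch, assuming the blob name from deploy.prototxt is correct (file names here are placeholders for your actual model files):

```
[property]
proto-file=deploy.prototxt
model-file=facenet.caffemodel
# Mark this blob as a network output so TensorRT keeps it in the engine
output-blob-names=inception_5b/output
```

If an engine file was already built without this setting, it needs to be deleted so nvinfer regenerates it with the named output layer.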
I am unable to fetch this tensor: since it is a pointer, I cannot print its value directly. How can I print its values or convert it into an array of 128 floats (the equivalent of a NumPy array of shape (128,)) in C++?