DeepStream for face recognition

I need to build a face recognition app using DeepStream 5.0, so I basically need a face detector (the MTCNN model) and a feature extractor (the Keras FaceNet model). I have the Caffe model and prototxt files for all three MTCNN stages, and a UFF file for the FaceNet model. I want to know how to integrate these using DeepStream.
I also found a face detection model, facenet-120, and I am willing to use it instead of the MTCNN solution for detecting faces. Please also suggest how to use it together with the FaceNet model.
My goal is to output a 128-D vector for one face from the FaceNet model.

Thanks!!

Hi,

You can set the TensorRT output blob to the feature layer directly:

[property]
...
output-blob-names=inception_5b/output
...

Then, you can access it through NvDsInferTensorMeta.

/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test

Thanks.


Hello!

Could you please give detailed step-by-step instructions on how to go about it? I am relatively new to TensorRT and DeepStream.

Thanks!

Hello!
I don’t think I will need the detectnet.prototxt file. I have all the models I need for facenet-120.

It is in the following directory:
jetson-inference/build/aarch64/bin/networks/facenet-120

If I use facenet-120 as the primary model, no secondary model will be required, since I can directly access the embeddings as @AastaLLL suggested:

[property] 
...
output-blob-names=inception_5b/output
...

So now I need to know what to edit in:
/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-infer-tensor-meta-test

It is a ready-made app with one primary model and three secondary models, and I need it to work with only one primary model, i.e., facenet-120.

Should I use deepstream-test1 instead of deepstream-infer-tensor-meta-test?

Thanks!

Hi,

We still recommend using deepstream-infer-tensor-meta-test, since it demonstrates how to access the tensor data from the TensorRT engine.

You can just turn off the three secondary GIEs for your use case.
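For context, in deepstream-infer-tensor-meta-test the three SGIEs are created directly in the source, so turning them off there means not creating/linking those elements in main(). If you instead drive the pipeline with a deepstream-app style configuration file, the equivalent is disabling the secondary GIE groups. A sketch, assuming the reference app's default config layout and group names:

```ini
# Disable all three secondary inference engines; only the PGIE runs.
[secondary-gie0]
enable=0

[secondary-gie1]
enable=0

[secondary-gie2]
enable=0
```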

Thanks.


Hi,

@AastaLLL It would be really helpful if you could show, in code, how I can obtain the 128-D vector as the output using deepstream-infer-tensor-meta-test. Right now I am just interested in obtaining the vector itself.

Thanks.

Hi,

You can find more detail in our document here:
https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_details.html#wwpID0E0PDB0HA

Tensor Metadata

The Gst-nvinfer plugin can attach raw output tensor data generated by a TensorRT inference engine as metadata. It is added as an NvDsInferTensorMeta in the frame_user_meta_list member of NvDsFrameMeta for primary (full frame) mode, or in the obj_user_meta_list member of NvDsObjectMeta for secondary (object) mode.

Thanks.

Filename: deepstream_infer_tensor_meta_test.cpp
I wanted to look at the tensor values, so I printed meta with cout.

Output:

I want to see the tensor values, not the hex address of the pointer.

Thanks.

Hi,

I am using the facenet-120 model as my PGIE network here. The problem is that the output node inception_5b/output is present in the deploy.prototxt file, but it is not present in the engine.

This is what my dstensor_pgie_config.txt config file looks like:

So I am not sure whether to use this model or not.

Thanks.

Hi,

I need the raw tensor data, so I will have to dereference a void pointer that holds the layer data.

How do I dereference the void pointer buffer into a suitable format?

Also, let’s say I convert it to strings; how do I save all the strings to a txt file?

Thanks.

Hi,

Thanks for your update.

We are checking this issue with our internal team.
Will share more information with you later.

Hi,

meta is the buffer pointer.
You should be able to output the values by scanning the buffer meta->out_buf_ptrs_host[i].

Thanks.