How to load a FaceNet (Google) model into DeepStream and pass its output to another trained model

Hi There,
I am using the Jetson Nano platform for learning DeepStream. Currently I have configured one primary model to detect persons and one secondary model to detect faces. Now I want to pass the detected face to Google's FaceNet model. I have already converted that model to UFF and it's working great. But the problem is: if I configure facenet.uff as the 2nd secondary model, then how can I get the 128-d points as output from it and pass them to the 3rd secondary model that classifies the person using those points?

Please help me.
• Hardware Platform: Jetson Nano
• DeepStream Version: 4.0
• JetPack Version: 4.2
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

Option 1. You need to customize the nvinfer plugin and write a custom postprocess parser; refer to "Face detection with deepstream with landmarks".
Option 2. You can enable tensor metadata output and add a probe on your 2nd nvinfer to parse the output tensor from TensorRT, then construct your own user meta containing the 128-d points; refer to deepstream_infer_tensor_meta_test.cpp.
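For Option 2, the flow in Python looks roughly like the sketch below, modeled on the deepstream_infer_tensor_meta pattern. This is a hedged sketch, not a drop-in implementation: the probe name, the assumption of a single output layer at index 0, and `EMBEDDING_SIZE` are placeholders, and `pyds`/GStreamer are only imported inside the probe so the buffer-parsing helper stays importable on its own.

```python
import ctypes
import numpy as np

EMBEDDING_SIZE = 128  # assumption: your FaceNet UFF outputs 128 floats


def buffer_to_embedding(float_ptr, size=EMBEDDING_SIZE):
    """Copy a raw C float buffer (layer.buffer) into an owned numpy vector."""
    return np.ctypeslib.as_array(float_ptr, shape=(size,)).copy()


def sgie_src_pad_probe(pad, info, u_data):
    """Probe on the FaceNet nvinfer src pad; requires output-tensor-meta=1 in its config."""
    import pyds                      # deferred: only available inside a DeepStream app
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_user = obj_meta.obj_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                    tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)  # assumes one output layer
                    ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                      ctypes.POINTER(ctypes.c_float))
                    embedding = buffer_to_embedding(ptr)
                    # ...compare `embedding` against your known-face gallery here...
                l_user = l_user.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

The production samples wrap each `.cast()` / `.next` in `try/except StopIteration`; that is omitted here for brevity.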

One more thing to ask: I have set up a secondary parser that detects faces, and I have trained an image classifier on a face dataset. When I added it as a second secondary classifier, the code runs without error, but it doesn't show the name of the classified person. I know it gives false results, but it's just for learning purposes.

If I choose Option 1, how can I change the nvinfer plugin for only the second secondary model, and customize the postprocess parser for only that model?

Where can I find this file?

DS_ROOT_PATH is /opt/nvidia/deepstream/deepstream-5.0/ if you are using DS 5.0

Hi dhyey.bhanvadiya36,

What model did you use to detect the face for the secondary model?
I am working on a similar problem.


Hi… This would be nice in Python…

Hi @bcao,
I have integrated the FaceNet model with the PeopleNet TLT model using the deepstream-test2 Python sample app.
I used PeopleNet as the primary detector and FaceNet as the secondary inference, and removed the other two classifiers.
The app runs with no errors, but I wasn't able to parse the output tensor for FaceNet. l_user = obj_meta.obj_user_meta_list is always None, even though I have enabled tensor meta output in the config file with output-tensor-meta=1.
The probe is added on the nvvidconv sink pad.
I didn't change anything in the tracker config.

This is the configuration file I’m using for Facenet.


# 0=FP32 and 1=INT8 mode
## 0=Detector, 1=Classifier, 2=Segmentation, 100=Other
# Enable tensor metadata output

I have tried setting process-mode=1 so it acts as a primary GIE and classifies the whole frame, just to make sure FaceNet is working. It worked, and I was able to read the tensor output from l_user = frame_meta.frame_user_meta_list and get the output layer Bottleneck_BatchNorm/batchnorm_1/add_1:0. But I'm not able to do it in secondary mode.
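For what it's worth, once the 128-d embedding is extracted (from either frame-level or object-level tensor meta), the "3rd model that classifies the person using the points" is often not another nvinfer stage at all: the usual FaceNet recipe is L2-normalization plus a nearest-neighbor lookup against embeddings of known faces. A minimal sketch, where the gallery names and the distance threshold are made-up illustration values:

```python
import numpy as np


def l2_normalize(v):
    """Scale a vector to unit length; FaceNet embeddings are compared on the unit sphere."""
    return v / np.linalg.norm(v)


def identify(embedding, gallery, threshold=1.1):
    """Return the gallery name whose embedding is closest in Euclidean distance,
    or 'unknown' if even the best match is farther than the threshold."""
    query = l2_normalize(embedding)
    best_name, best_dist = "unknown", threshold
    for name, ref in gallery.items():
        dist = np.linalg.norm(query - l2_normalize(ref))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name
```

The threshold would normally be tuned on a validation set; 1.1 is just a placeholder in the typical range for L2-normalized FaceNet embeddings.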

Hi anasmk,

Please open a new topic for your issue. Thanks


Were you able to solve this? I have the same problem; please share your solution.
