I am using the Jetson Nano platform to learn DeepStream. Currently I have configured one primary model to detect persons and a secondary model to detect faces. Now I want to pass the detected face to Google's FaceNet model. I have already converted that model to UFF and it works well. The problem is: if I configure facenet.uff as the 2nd secondary model, how can I get the 128-d embedding out of it and pass it to a 3rd secondary model that classifies the person using those points?
Please help me.
• Hardware Platform: Jetson Nano
• DeepStream Version: 4.0
• JetPack Version: 4.2
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only):
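On the 3rd-secondary-model question: once the 128-d FaceNet embedding is available, the final classification step can be as simple as nearest-neighbor matching against embeddings of enrolled people. A minimal sketch in plain Python — the gallery vectors and the 1.1 distance threshold here are illustrative assumptions, not values from this thread:

```python
import math

# Hypothetical gallery: name -> 128-d FaceNet embedding.
# In a real pipeline these would come from running FaceNet on enrollment images.
GALLERY = {
    "alice": [1.0] + [0.0] * 127,
    "bob":   [0.0, 1.0] + [0.0] * 126,
}

def l2_distance(a, b):
    """Euclidean distance between two embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(embedding, threshold=1.1):
    """Return the closest gallery identity, or None if nobody is near enough.
    The FaceNet paper uses an L2 threshold of roughly 1.1 on its embeddings."""
    name, dist = min(((n, l2_distance(embedding, e)) for n, e in GALLERY.items()),
                     key=lambda t: t[1])
    return name if dist < threshold else None

probe = [0.9] + [0.0] * 127   # close to "alice"
print(classify(probe))        # -> alice
```

In a DeepStream app this lookup would run inside the probe that reads the FaceNet tensor meta, instead of configuring a 3rd nvinfer instance.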
Option 1. You need to customize the nvinfer plugin and write a custom postprocess parser; refer to Face detection with deepstream with landmarks.
Option 2. You can enable tensor metadata output and add a probe on your 2nd nvinfer to parse the output tensor from TensorRT, then construct your own user meta containing the 128-d points; refer to deepstream_infer_tensor_meta_test.cpp.
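In the Python bindings, the core of such a probe is casting a layer's raw buffer to floats. Here is a sketch of just that buffer-handling step, assuming a flat 128-float output layer; the pyds calls that would produce `buffer_ptr` are shown only in the docstring (names as used in the deepstream_ssd_parser sample — verify them against your DeepStream version):

```python
import ctypes

def extract_embedding(buffer_ptr, num_floats=128):
    """Copy a flat float buffer (e.g. a FaceNet output layer) into a Python list.

    In a real probe, buffer_ptr would be obtained roughly like:
        tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
        layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)
        buffer_ptr = pyds.get_ptr(layer.buffer)
    """
    floats = ctypes.cast(ctypes.c_void_p(buffer_ptr),
                         ctypes.POINTER(ctypes.c_float))
    return [floats[i] for i in range(num_floats)]

# Stand-in for a real TensorRT output buffer:
raw = (ctypes.c_float * 128)(*([0.5] * 128))
emb = extract_embedding(ctypes.addressof(raw))
print(len(emb), emb[0])  # -> 128 0.5
```

The returned list is the 128-d vector you would attach as user meta (or classify directly in the same probe).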
One more thing to ask: I have set up a secondary detector that detects faces, and I have trained an image classifier on a face dataset. When I added it as a second secondary classifier, the code runs without errors, but it doesn't show the name of the classified person. I know it gives false results, but it's just for learning purposes.
I have integrated the FaceNet model with the PeopleNet TLT model using the deepstream-test2 Python sample app.
I used PeopleNet as the primary detector and FaceNet as the secondary inference, and removed the other two classifiers.
The app runs fine with no errors, but I am not able to parse the output tensor for FaceNet: l_user = obj_meta.obj_user_meta_list is always None, even though I have enabled output tensor meta in the config file (output-tensor-meta=1).
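For reference, here is a minimal sketch of the secondary-GIE property keys that matter for this kind of setup. The file names are placeholders; the key names are standard Gst-nvinfer config properties, but check them against the docs for your DeepStream version:

```ini
[property]
gpu-id=0
# placeholder file names -- substitute your own model files
uff-file=facenet.uff
model-engine-file=facenet.uff_b1_gpu0_fp16.engine
batch-size=1
network-mode=2
# run as secondary, on objects produced by the primary detector (gie-unique-id=1)
process-mode=2
operate-on-gie-id=1
gie-unique-id=2
# attach raw output tensors as user meta instead of running nvinfer's postprocessing
output-tensor-meta=1
network-type=100
# rule out silent size filtering of small face crops
input-object-min-width=0
input-object-min-height=0
```

With network-type=100, nvinfer does no parsing of its own, so all interpretation of the 128-d output has to happen in your probe.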
The probe is added on the nvvidconv sink pad.
I didn't change anything in the tracker config.
This is the configuration file I’m using for Facenet.
I have tried setting process-mode=1 so that it acts as a primary GIE and classifies the whole frame, just to make sure FaceNet is working. That worked: I was able to read the tensor output from l_user = frame_meta.frame_user_meta_list and get the output layer Bottleneck_BatchNorm/batchnorm_1/add_1:0. But I am not able to do the same in secondary mode.