Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) 5.1
• TensorRT Version 8.5.2
I’m trying to implement the pipeline below in DeepStream:
Face detection → Tracking → Recognition
I’m using FaceDetect for detection (FaceDetect | NVIDIA NGC) and FaceNet for recognition. I have tested the detection and tracking parts and both are working well. However, when I added face recognition (the FaceNet model converted to ONNX) to the pipeline by modifying the test-2 sample, it generated an error. I have attached the full error log.
DS_Facedetec+faceneterror (55.0 KB)
I have used a .etlt INT8 model (detection) and an .onnx FP16 model (recognition), and from the log I can see that the primary/secondary configs report the expected input/output dimensions. FaceNet (the recognition model) takes a 160x160 input and outputs a 512-dimensional embedding.
Can you please advise how to solve this error?
It seems your FaceNet postprocessing is wrong:
0:00:56.738614428 19297 0x31ed85e0 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<secondary-inference face_classifier> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:56.738656636 19297 0x31ed85e0 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<secondary-inference face_classifier> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:735> [UID = 1]: Failed to parse bboxes
You configured FaceNet as a detector, while your FaceNet model outputs only one layer:
0 INPUT kFLOAT input 3x160x160
1 OUTPUT kFLOAT output 512
Please configure your model correctly; it seems you need to customize the postprocessing of FaceNet.
Does that mean there’s an error in converting the FaceNet model from PyTorch to ONNX? I have followed the official TensorRT guide.
Additionally, where do I configure FaceNet as recognition? I wrote over the test-2 sample, where the second configuration is the recognition part.
No. When you choose a model, please also understand its input and output. There are thousands of models for different purposes; TensorRT only does the inferencing, you need to do the pre-processing and post-processing yourself.
In my case, FaceDetect gives two outputs and FaceNet gives one output, which is the 512-dimensional embedding. I wrote over test-2 to add the classification, which compares the generated embeddings with embeddings in a dataset. Can you please advise what pre-processing/post-processing technique should be used and where to add it in the above pipeline?
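The embedding-comparison step described above is outside of DeepStream itself. A minimal sketch of how it could look, assuming the SGIE output has already been extracted into a NumPy vector (the function names, the gallery, and the 0.7 threshold are all illustrative assumptions, not part of the samples):

```python
import numpy as np

def match_embedding(query, gallery, names, threshold=0.7):
    """Return the best-matching identity for a 512-d embedding, or None.

    query:   (512,) embedding taken from the SGIE output tensor
    gallery: (N, 512) array of enrolled embeddings
    names:   list of N identity labels
    """
    # Normalize so a dot product equals cosine similarity
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                       # cosine similarity against every entry
    best = int(np.argmax(sims))
    return names[best] if sims[best] >= threshold else None

# Toy usage with random vectors standing in for real FaceNet embeddings
rng = np.random.default_rng(0)
gallery = rng.normal(size=(3, 512))
names = ["alice", "bob", "carol"]
query = gallery[1] + 0.01 * rng.normal(size=512)  # close to "bob"
print(match_embedding(query, gallery, names))      # prints: bob
```

In the pipeline this matching would typically run inside a pad probe on the SGIE source pad, after reading the embedding out of the object's user meta.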
Is there any documentation for the user_meta? I’m trying to understand more about it. I can run the pipeline without errors now, but I think the faces detected by the PGIE are not passed to the SGIE because they have to be resized to (160,160). How can I add this pre-processing to the pipeline? I’m using the test-2 Python app sample.
The SGIE preprocessing is done inside gst-nvinfer. gst-nvinfer knows the model input dimensions; you just need to tell gst-nvinfer which scaling method you want via the configuration file. DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums
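As a hedged sketch, the scaling behavior is controlled by properties in the `[property]` group of the nvinfer config file (the values below are illustrative; check the gst-nvinfer documentation for your DeepStream version):

```ini
[property]
scaling-filter=1          # 0=Nearest, 1=Bilinear, other values per the docs
scaling-compute-hw=0      # 0=platform default, 1=GPU, 2=VIC (Jetson)
maintain-aspect-ratio=0   # 1 = letterbox the crop instead of stretching it
```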
which property in the config file will do the scaling part?
I have tried disabling input/output tensor-meta to use the default preprocessing, but obj_meta.obj_user_meta_list is always None. When enabling output-tensor-meta, obj_meta.obj_user_meta_list is populated for only one face out of all the detected faces.
I was not able to figure out the issue. I have followed the steps in Facenet with DeepStream Python Not Able to Parse Output Tensor Meta - #4 by anasmk, as it seems a very similar issue to the one I’m facing. I have tried to generate a dynamic facenet ONNX file with dynamic batch, H and W, and I thought this would remove the size limitation of FaceNet, but obj_meta.obj_user_meta_list is still None for almost all detected objects.
Did you use the Python deepstream-test2 sample to integrate FaceDetect and FaceNet?
It seems your FaceNet outputs an embedding vector; how did you add the user meta to the object meta for the FaceNet output?
There is a C/C++ sample for models that output embedding vectors: deepstream_tao_apps/apps/tao_others/deepstream-mdx-perception-app at master · NVIDIA-AI-IOT/deepstream_tao_apps (github.com)
Yes, I’m using the Python test-2 sample.
[quote=“Fiona.Chen, post:14, topic:266201”]
how did you add user meta to object meta for the facenet output
[/quote]
Can you please clarify if there’s any sample for this, specifically a Python sample? I have added a few lines as in https://github.com/riotu-lab/deepstream-facenet/blob/master/deepstream_test_2.py but this didn’t solve the issue.
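The core trick in that linked sample is casting the raw float buffer of the SGIE's output layer into a NumPy array. That step can be exercised on its own; in this hedged sketch a plain ctypes array stands in for the pointer that would normally come from `pyds.get_ptr(layer.buffer)` inside the pad probe:

```python
import ctypes
import numpy as np

EMBEDDING_SIZE = 512  # FaceNet output dimension, per the model log above

def layer_buffer_to_numpy(buffer_ptr, size):
    """Cast a raw float* output buffer to a (size,) NumPy array.

    In a DeepStream probe, buffer_ptr would come from
    pyds.get_ptr(layer.buffer); here it is any float buffer address.
    """
    float_ptr = ctypes.cast(buffer_ptr, ctypes.POINTER(ctypes.c_float))
    # Copy so the array outlives the underlying GStreamer buffer
    return np.ctypeslib.as_array(float_ptr, shape=(size,)).copy()

# Stand-in for the SGIE output: a ctypes buffer of 512 floats
raw = (ctypes.c_float * EMBEDDING_SIZE)(*range(EMBEDDING_SIZE))
embedding = layer_buffer_to_numpy(ctypes.addressof(raw), EMBEDDING_SIZE)
print(embedding.shape)  # prints: (512,)
```

The resulting vector is what would then be compared against the enrolled embeddings.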
Please set “network-type=100” in your SGIE configuration file.
There is a C/C++ sample for models that output embedding vectors: deepstream_tao_apps/apps/tao_others/deepstream-mdx-perception-app at master · NVIDIA-AI-IOT/deepstream_tao_apps (github.com)
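Putting the advice together, a hedged sketch of the FaceNet SGIE config might look like the following (file paths and GIE IDs are illustrative assumptions for a test-2-style pipeline):

```ini
[property]
onnx-file=facenet.onnx
network-mode=2            # 0=FP32, 1=INT8, 2=FP16
network-type=100          # "other": skip built-in detector/classifier parsing
output-tensor-meta=1      # attach raw output tensors as user meta per object
process-mode=2            # secondary mode: infer on objects from the PGIE
gie-unique-id=2
operate-on-gie-id=1
```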
This actually caused the video to freeze. I think I have made some progress, but I would like to ask if there’s any proper documentation that explains the basics, like what object meta and output tensors are, etc.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.