Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1
• TensorRT Version: 8.5.2
I’m trying to implement the pipeline below in DeepStream:
Face detection → Tracking → Recognition
I’m using FaceDetect for detection (FaceDetect | NVIDIA NGC) and FaceNet for recognition. I have tested the detection and tracking parts and both work well. However, when I added face recognition (the FaceNet model converted to ONNX) to the pipeline by writing over the Test-2 sample, it generated an error. I have attached the full error log. DS_Facedetec+faceneterror (55.0 KB)
I used an .etlt INT8 file for detection and an ONNX FP16 file for recognition, and from the log I can see that the primary/secondary configs report the expected input/output dimensions. FaceNet (the recognition model) takes a 160x160 input and outputs a 512-dimensional embedding.
Can you please advise how to solve this error?
Does that mean there’s an error in converting the FaceNet model from PyTorch to ONNX? I have followed the official TensorRT guide.
Additionally, where should FaceNet be configured as the recognition model? I wrote over the Test-2 sample, where the second configuration is the recognition part.
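For concreteness, a minimal sketch of how the second nvinfer instance is wired up in my modified Test-2 app; the element names and the config file names below are placeholders, not my exact files:

```python
#!/usr/bin/env python3
# Minimal sketch of the inference elements only (not the full Test-2 app).
# "facedetect_pgie_config.txt" and "facenet_sgie_config.txt" are placeholder names.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")     # FaceDetect
tracker = Gst.ElementFactory.make("nvtracker", "tracker")
sgie = Gst.ElementFactory.make("nvinfer", "secondary-inference")   # FaceNet

pgie.set_property("config-file-path", "facedetect_pgie_config.txt")
sgie.set_property("config-file-path", "facenet_sgie_config.txt")

# Linking order follows Test-2: streammux -> pgie -> tracker -> sgie -> nvvidconv -> nvosd -> sink.
# Keys that matter in facenet_sgie_config.txt (example values):
#   onnx-file=facenet.onnx   # the exported recognition model
#   network-mode=2           # FP16
#   process-mode=2           # run on objects from the PGIE, not on full frames
#   operate-on-gie-id=1      # gie-unique-id of the FaceDetect PGIE
#   output-tensor-meta=1     # attach the raw 512-d output as tensor meta
#   network-type=100         # no built-in parsing; post-process in a probe
```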
No. When you choose a model, please also understand its input and output. There are thousands of models for different purposes. TensorRT only does the inferencing; you need to do the pre-processing and post-processing yourself.
In my case, FaceDetect gives two outputs and FaceNet gives one output, which is the 512-dimensional embedding. I wrote over Test-2 to add the classification step, which compares the generated embedding with embeddings in a dataset. Can you please advise what pre-processing/post-processing should be used and where to add it in the above pipeline?
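To make the comparison step concrete, the post-processing I have in mind is essentially cosine similarity against a small gallery of known embeddings. A minimal sketch with stand-in data (not my actual dataset; the 0.7 threshold is just an example value):

```python
import numpy as np

def match_embedding(embedding, gallery, threshold=0.7):
    """Compare one FaceNet embedding against a gallery of known embeddings.

    embedding: (512,) float array produced by the SGIE for one face.
    gallery:   dict mapping person name -> (512,) reference embedding.
    threshold: minimum cosine similarity to accept a match (to be tuned).
    Returns the best-matching name, or None if nothing clears the threshold.
    """
    emb = embedding / np.linalg.norm(embedding)
    best_name, best_score = None, -1.0
    for name, ref in gallery.items():
        ref_n = ref / np.linalg.norm(ref)
        score = float(np.dot(emb, ref_n))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Example usage with random stand-in data:
gallery = {"alice": np.random.randn(512), "bob": np.random.randn(512)}
print(match_embedding(np.random.randn(512), gallery))
```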
Is there any documentation for the user_meta? I’m trying to understand more about it. I can run the pipeline without errors now, but I think the faces detected by the PGIE are not passed to the SGIE because they have to be resized to (160, 160). How can I add this pre-processing to the pipeline? I’m using the Test-2 Python app sample.
Which property in the config file will do the scaling part?
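For reference, my PyTorch-to-ONNX conversion is essentially the standard export with the input fixed at 1x3x160x160. A minimal sketch, assuming the facenet-pytorch InceptionResnetV1 implementation (my actual model and weights may differ):

```python
import torch
from facenet_pytorch import InceptionResnetV1  # assumption: facenet-pytorch implementation

# FaceNet returning 512-d embeddings, in eval mode for export.
model = InceptionResnetV1(pretrained="vggface2").eval()

# Fixed 1x3x160x160 input. As far as I understand, nvinfer in secondary mode
# (process-mode=2) scales each detected face crop to the network input
# resolution by itself, so no extra resize element should be needed.
dummy = torch.randn(1, 3, 160, 160)
torch.onnx.export(
    model,
    dummy,
    "facenet.onnx",
    input_names=["input"],
    output_names=["embeddings"],
    opset_version=11,
)
```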
I have tried disabling input/output tensor meta to use the default pre-processing, but obj_meta.obj_user_meta_list is always None. When enabling output tensor meta, obj_meta.obj_user_meta_list is populated for only one face out of all the detected faces.
I was not able to figure out the issue. I have followed the steps in Facenet with DeepStream Python Not Able to Parse Output Tensor Meta - #4 by anasmk, as it seems to be a very similar issue to what I’m facing. I have also tried to generate a dynamic facenet ONNX file with dynamic batch, H and W, thinking this would remove the input-size limitation of FaceNet, but obj_meta.obj_user_meta_list is still None for almost all detected objects.
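For reference, this is the pattern I’m using in the probe on the SGIE src pad to look for the FaceNet tensor meta. It follows the DeepStream Python sample apps and assumes output-tensor-meta=1 in the SGIE config with a single 512-float output layer:

```python
import ctypes
import numpy as np
import pyds
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst


def sgie_src_pad_buffer_probe(pad, info, u_data):
    """Walk batch -> frame -> object -> user meta and read the FaceNet output
    tensor that nvinfer attaches when output-tensor-meta=1."""
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_user = obj_meta.obj_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                    tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                    # FaceNet has a single output layer holding the 512-d embedding.
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)
                    ptr = ctypes.cast(pyds.get_ptr(layer.buffer), ctypes.POINTER(ctypes.c_float))
                    embedding = np.ctypeslib.as_array(ptr, shape=(512,)).copy()
                    # ...compare `embedding` against the gallery here...
                l_user = l_user.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

The probe is attached with `sgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, sgie_src_pad_buffer_probe, 0)`.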
[quote=“Fiona.Chen, post:14, topic:266201”]
how did you add user meta to object meta for the facenet output
[/quote]
Can you please clarify whether there is a sample for this, ideally a Python sample? I have added a few lines following https://github.com/riotu-lab/deepstream-facenet/blob/master/deepstream_test_2.py, but this didn’t solve the issue.
This actually caused the video to freeze. I think I have made some progress, but I would like to ask whether there is any proper documentation that explains the basics, such as what object meta and output tensors are, etc.