How to pass RetinaFace landmarks in custom parser function in Deepstream6.1

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU)
• DeepStream Version
• TensorRT Version
• Issue Type( questions, new requirements, bugs)

I have an issue with RetinaFace landmarks. I am able to pass the bounding boxes using NvDsInferObjectDetectionInfo from my custom_parser_function.cpp, but I could not pass the landmarks. I have reviewed many related topics and followed all their suggestions, but none of them worked.

How can I attach the landmarks? I need them so I can process them in the next stage, which is face alignment.

Please refer to deepstream_tao_apps/deepstream_faciallandmark_app.cpp at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub.
The landmarks are attached as user meta, which downstream plugins can access.

This approach is different: I am using RetinaFace for detection and extracting the landmarks from the same model. It has three output layers: one for the bounding box (4 values), one for the landmarks (10 values, i.e. 5 facial points), and one for the confidence. NvDsInferObjectDetectionInfo has only left, top, width, height, classId, and detectionConfidence. How can I pass the landmarks?
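To make the three-layer layout concrete, here is a minimal, self-contained sketch of the decoding step a custom parser would perform. In a real DeepStream parser the buffers arrive as NvDsInferLayerInfo; here plain float vectors stand in for them, and the struct and function names (`FaceDetection`, `decodeRetinaFace`, the threshold) are hypothetical illustrations, not SDK types:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical container: NvDsInferObjectDetectionInfo has no landmark
// fields, so the parser needs its own struct (or user meta) for them.
struct FaceDetection {
    float left, top, width, height;  // bbox layer: 4 values per detection
    float landmarks[10];             // landmark layer: 5 (x, y) points
    float confidence;                // confidence layer: 1 value
};

// Decode one detection per index from the three flat output buffers.
std::vector<FaceDetection> decodeRetinaFace(
        const std::vector<float>& bboxLayer,      // 4 floats per detection
        const std::vector<float>& landmarkLayer,  // 10 floats per detection
        const std::vector<float>& confLayer,      // 1 float per detection
        float confThreshold) {
    std::vector<FaceDetection> out;
    for (std::size_t i = 0; i < confLayer.size(); ++i) {
        if (confLayer[i] < confThreshold)
            continue;  // skip low-confidence anchors
        FaceDetection det{};
        det.left   = bboxLayer[i * 4 + 0];
        det.top    = bboxLayer[i * 4 + 1];
        det.width  = bboxLayer[i * 4 + 2];
        det.height = bboxLayer[i * 4 + 3];
        for (int k = 0; k < 10; ++k)
            det.landmarks[k] = landmarkLayer[i * 10 + k];
        det.confidence = confLayer[i];
        out.push_back(det);
    }
    return out;
}
```

Only the bbox and confidence values can be returned through NvDsInferObjectDetectionInfo; the `landmarks` array has to travel through a side channel such as user meta, which is what the reply below points to.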

The inference output is NvDsInferLayerInfo, not NvDsInferObjectDetectionInfo; that code is used to parse the inference output. See deepstream_tao_apps/deepstream_faciallandmark_app.cpp at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub. You can use user meta to pass arbitrary data; please refer to nvds_add_facemark_meta.
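Attaching a custom payload as user meta requires supplying copy and release callbacks, since DeepStream deep-copies and frees meta as buffers move through the pipeline. The sketch below mirrors that contract in standalone form (the real path goes through NvDsUserMeta and nvds_add_user_meta_to_obj); `FaceLandmarks` and the two helper names are hypothetical stand-ins so the pattern runs without the SDK:

```cpp
#include <cstdlib>
#include <cstring>

// Hypothetical payload carrying the 5 (x, y) RetinaFace keypoints.
struct FaceLandmarks {
    float points[10];
};

// copy_func-style callback: DeepStream invokes this when a buffer
// (and its attached user meta) is duplicated downstream.
static void* copy_landmarks(void* data) {
    auto* src = static_cast<FaceLandmarks*>(data);
    auto* dst = static_cast<FaceLandmarks*>(std::malloc(sizeof(FaceLandmarks)));
    std::memcpy(dst, src, sizeof(FaceLandmarks));
    return dst;
}

// release_func-style callback: invoked when the owning buffer is freed.
static void release_landmarks(void* data) {
    std::free(data);
}
```

With the real API, these two functions would be assigned to the `copy_func` and `release_func` members of the acquired NvDsUserMeta before attaching it to the object meta, and the alignment stage would read the payload back from the object's `obj_user_meta_list`.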

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
