Please provide complete information as applicable to your setup.
• Hardware Platform (GPU)
• DeepStream Version
• TensorRT Version
• Issue Type (questions, new requirements, bugs)
I have an issue regarding RetinaFace landmarks. I am able to pass the bbox using
NvDsInferObjectDetectionInfo from the
custom parser function (custom_parser_function.cpp), but I could not pass the landmarks. I have reviewed many related issues and followed all their suggestions, but none of them worked.
How do I add the landmarks? I need them so I can process them in the next stage, which is alignment.
This approach is different: I'm using RetinaFace for detection and extracting the landmarks. It has three output layers: one for the bbox (4 values), one for the landmarks (10 values), and one for the confidence.
NvDsInferObjectDetectionInfo has only left, top, width, height, classId, and confidence. How can I pass the landmarks?
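For concreteness, the per-candidate decode of the three output layers described above could look like the following self-contained sketch. The buffer layouts (4 floats per bbox, 10 per landmark set, 1 confidence per candidate) and the struct name are assumptions for illustration; a real RetinaFace parser must also apply anchor/variance decoding and NMS before this step.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for one decoded face: bbox (4 values),
// 5 landmark (x, y) pairs, and a confidence score.
struct FaceDetection {
    std::array<float, 4>  bbox;       // x1, y1, x2, y2
    std::array<float, 10> landmarks;  // 5 (x, y) pairs
    float confidence;
};

// Walks the three flat output buffers in parallel, one entry per
// candidate, and keeps candidates above a confidence threshold.
std::vector<FaceDetection> decodeOutputs(const float* bboxLayer,
                                         const float* landmarkLayer,
                                         const float* confLayer,
                                         std::size_t numCandidates,
                                         float threshold) {
    std::vector<FaceDetection> faces;
    for (std::size_t i = 0; i < numCandidates; ++i) {
        if (confLayer[i] < threshold) continue;
        FaceDetection f{};
        for (int j = 0; j < 4; ++j)  f.bbox[j]      = bboxLayer[i * 4 + j];
        for (int j = 0; j < 10; ++j) f.landmarks[j] = landmarkLayer[i * 10 + j];
        f.confidence = confLayer[i];
        faces.push_back(f);
    }
    return faces;
}
```

In a DeepStream custom parser, the three `const float*` pointers would come from the `buffer` fields of the corresponding entries in the `NvDsInferLayerInfo` array; only the bbox and confidence fit into `NvDsInferObjectDetectionInfo`, which is why the landmarks need a separate path (see the reply below about user meta).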
The inference output is NvDsInferLayerInfo, not NvDsInferObjectDetectionInfo; that code is used to parse the inference output. See deepstream_tao_apps/deepstream_faciallandmark_app.cpp at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub. You can use user meta to pass whatever data you need; please refer to nvds_add_facemark_meta.
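The user-meta route above amounts to: wrap the landmarks in a payload, give DeepStream copy/release callbacks for it, and attach it to the object's meta. The payload struct and callback shapes below are a simplified, self-contained sketch (real `NvDsMetaCopyFunc`/`NvDsMetaReleaseFunc` callbacks receive the `NvDsUserMeta` pointer plus a user-data argument, not the payload directly); the DeepStream attach calls are shown only as comments.

```cpp
#include <cstdlib>
#include <cstring>

// Hypothetical payload carried through NvDsUserMeta->user_meta_data:
// 5 facial landmark (x, y) pairs for one detected face.
struct FaceLandmarksMeta {
    float points[10];
};

// Deep-copy callback: DeepStream calls something of this shape whenever
// the meta is copied downstream. Simplified here to take the payload
// itself so the sketch compiles stand-alone.
void* landmarks_copy(void* data) {
    auto* src = static_cast<FaceLandmarksMeta*>(data);
    auto* dst = static_cast<FaceLandmarksMeta*>(
        std::malloc(sizeof(FaceLandmarksMeta)));
    std::memcpy(dst, src, sizeof(FaceLandmarksMeta));
    return dst;
}

// Release callback: frees the payload when the meta is destroyed.
void landmarks_release(void* data) {
    std::free(data);
}

// In a real probe you would then do roughly (DeepStream API, not
// compiled in this sketch):
//
//   NvDsUserMeta* um = nvds_acquire_user_meta_from_pool(batch_meta);
//   um->user_meta_data = payload;   // heap-allocated FaceLandmarksMeta*
//   um->base_meta.meta_type = ...;  // e.g. via nvds_get_user_meta_type()
//   um->base_meta.copy_func = ...;  // copy callback
//   um->base_meta.release_func = ...;
//   nvds_add_user_meta_to_obj(obj_meta, um);
//
// The alignment stage then iterates obj_meta->obj_user_meta_list and
// casts user_meta_data back to the payload type.
```

The faciallandmark sample's nvds_add_facemark_meta follows this same pattern with its own payload type, so mirroring it for RetinaFace's 10 landmark values is a natural fit.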