I am customizing DeepStream Test App 1 into a face recognition system. I have downloaded the Face Detection model from NGC and integrated the nvtracker plugin to track the detected faces. I am aware that FaceNet is required to extract the embeddings from the detected human faces. I need some guidance on how to link the nvtracker output to FaceNet for obtaining the face embeddings of the tracked faces. I plan to use this information to develop a custom plugin for the integration.
Thanks in advance.
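For context, the usual way to run FaceNet on tracked faces in DeepStream is to add it as a secondary `nvinfer` element (SGIE) placed after `nvtracker`, configured to operate on the objects produced by the primary face detector and to attach its raw output tensor (the embedding) to each object's metadata. Below is a hedged sketch of such an SGIE config; the model file names, batch size, and `gie-unique-id` values are assumptions for illustration, while the property keys themselves (`process-mode`, `operate-on-gie-id`, `network-type`, `output-tensor-meta`) are standard `nvinfer` options:

```
[property]
gpu-id=0
# Hypothetical FaceNet model files -- substitute your own
onnx-file=facenet.onnx
model-engine-file=facenet.onnx_b16_gpu0_fp16.engine
batch-size=16
# 2 = secondary mode: infer on detected objects, not full frames
process-mode=2
# Only run on objects from the primary face detector
# (assumes the PGIE config sets gie-unique-id=1)
operate-on-gie-id=1
gie-unique-id=2
# 100 = skip built-in post-processing; output is a raw tensor
network-type=100
# Attach the raw tensor (embedding) as NvDsInferTensorMeta on each object
output-tensor-meta=1
```

With `output-tensor-meta=1`, a pad probe downstream of the SGIE can walk each object's `obj_user_meta_list` for `NVDSINFER_TENSOR_OUTPUT_META` and read the embedding, keyed by the tracker-assigned `object_id`.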
You could refer to the demo deepstream_faciallandmark_app.cpp, which shows how to square the tracked bounding box before it is fed to the secondary network:
Code

float square_size = MAX (obj_meta->rect_params.width,
                         obj_meta->rect_params.height);
float center_x = obj_meta->rect_params.width / 2.0 +
                 obj_meta->rect_params.left;
float center_y = obj_meta->rect_params.height / 2.0 +
                 obj_meta->rect_params.top;
/* Check the border: keep the original bbox if the squared
   box would fall outside the frame. */
if (center_x < square_size / 2.0 || center_y < square_size / 2.0 ||
    center_x + square_size / 2.0 > frame_width ||
    center_y + square_size / 2.0 > frame_height) {
  g_print ("Keep the original bbox\n");
} else {
  obj_meta->rect_params.left = center_x - square_size / 2.0;
  obj_meta->rect_params.top = center_y - square_size / 2.0;
  obj_meta->rect_params.width = square_size;
  obj_meta->rect_params.height = square_size;
}

(Note: the last border condition in the snippet as originally posted read `center_y - square_size/2.0 > frame_height`, which can never reject a box below the frame; the check should use `+` to test the bottom edge, as shown above.)