How to pass the 5 landmarks of retinaface and perform face alignment between pgie and sgie?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson NX

• DeepStream Version
Deepstream 5.0

• JetPack Version (valid for Jetson only)
JetPack 4.4 (L4T 32.4.4)

• TensorRT Version
TensorRT 7.1.3.0

• NVIDIA GPU Driver Version (valid for GPU only)
CUDA 10.2.89, CUDNN 8.0, Driver unknown

• Issue Type (questions, new requirements, bugs)
Questions

For the face recognition module, we have created a custom parser function for retinaface. It works fine, and we can see the output in the custom parser function. We are following the example given in the NVIDIA DeepStream SDK API Reference: nvdsinfer_custom_impl.h File Reference.

First question: we have no idea how to pass the bbox and the 5 landmarks to the next stage or into the metadata. For example, there is no clear instruction on how to pass the objectList (from the following function) into the metadata.

typedef bool (* NvDsInferParseCustomFunc) (
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList);
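
For reference, here is a minimal sketch of the shape of such a parser. The layer layout, the [score, x, y, w, h, 10 landmark values] record format, and the decode step are placeholders, not our actual retinaface post-processing (the real model also needs anchor/prior decoding):

#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomRetinaface (
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
  const NvDsInferLayerInfo &layer = outputLayersInfo[0];
  const float *data = static_cast<const float *> (layer.buffer);

  /* Placeholder layout: one detection = [score, x, y, w, h, 10 landmark values]. */
  const unsigned int stride = 15;
  const unsigned int numDets = layer.inferDims.d[0];

  for (unsigned int i = 0; i < numDets; i++) {
    const float *det = data + i * stride;
    if (det[0] < detectionParams.perClassPreclusterThreshold[0])
      continue;

    NvDsInferObjectDetectionInfo obj;
    obj.classId = 0;
    obj.detectionConfidence = det[0];
    obj.left = det[1] * networkInfo.width;
    obj.top = det[2] * networkInfo.height;
    obj.width = det[3] * networkInfo.width;
    obj.height = det[4] * networkInfo.height;
    objectList.push_back (obj);
    /* det[5..14] are the 10 landmark floats; objectList has no field for them,
     * which is exactly what the first question above is about. */
  }
  return true;
}

/* Lets nvinfer validate the function prototype at load time. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE (NvDsInferParseCustomRetinaface);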

Second question: how do we do face alignment or face warping between pgie and sgie, as shown in the attached image of the face recognition pipeline?

Third question: how do we produce a customized output in sgie, where we need the face features for face matching? You can think of the features as 128 or 512 numbers from the secondary classifier.

It would be good if you could point us to a direct example of how to achieve this goal.

Here are a few posts we have followed, but we still have no clue how to achieve this goal.

Thanks.

Hi @jane.shen1,
To question #1:
You can check the function attach_metadata_detector() in /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer/gstnvinfer_meta_utils.cpp. This function fills the detection output into metadata - (NvDsObjectMeta *)obj_meta. It already adds the bbox into the metadata; you can add the landmarks into the “NvDsUserMetaList *obj_user_meta_list;” field of obj_meta. obj_meta is passed on to the following stages of the pipeline, so you can access it there.
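
For example, a minimal sketch of that attachment, assuming 10 landmark floats (5 x,y pairs) per object; the meta type string and the copy/release helper names are placeholders, following the same pattern as deepstream_user_metadata_app.c:

#include <string.h>
#include "nvdsmeta.h"

/* Copy/release helpers in the style of copy_user_meta()/release_user_meta()
 * from deepstream_user_metadata_app.c. */
static gpointer copy_landmarks_meta (gpointer data, gpointer user_data);
static void release_landmarks_meta (gpointer data, gpointer user_data);

static void
attach_landmarks (NvDsBatchMeta * batch_meta, NvDsObjectMeta * obj_meta,
    const float *landmarks)
{
  NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool (batch_meta);

  gfloat *data = (gfloat *) g_malloc0 (10 * sizeof (gfloat));
  memcpy (data, landmarks, 10 * sizeof (gfloat));

  user_meta->user_meta_data = (void *) data;
  user_meta->base_meta.meta_type =
      nvds_get_user_meta_type ((gchar *) "NVIDIA.NVINFER.USER_LANDMARK_META");
  user_meta->base_meta.copy_func = (NvDsMetaCopyFunc) copy_landmarks_meta;
  user_meta->base_meta.release_func = (NvDsMetaReleaseFunc) release_landmarks_meta;

  /* Appends the user meta to obj_meta->obj_user_meta_list. */
  nvds_add_user_meta_to_obj (obj_meta, user_meta);
}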

To question #2:
If face alignment is a fast operation, you can add a probe on the sink pad of sgie to do the face alignment; otherwise, you can add a GStreamer plugin between pgie and sgie to do it.
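
A rough sketch of that probe (the warp itself is left as a placeholder; pick whatever alignment implementation you like, e.g. a CUDA kernel or OpenCV on the mapped NvBufSurface):

#include <gst/gst.h>
#include "gstnvdsmeta.h"

static GstPadProbeReturn
sgie_sink_pad_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj != NULL;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      /* Read the landmarks back from obj_meta->obj_user_meta_list and warp
       * the corresponding face region in the surface here. */
    }
  }
  return GST_PAD_PROBE_OK;
}

/* While building the pipeline: */
GstPad *sinkpad = gst_element_get_static_pad (sgie, "sink");
gst_pad_add_probe (sinkpad, GST_PAD_PROBE_TYPE_BUFFER, sgie_sink_pad_probe,
    NULL, NULL);
gst_object_unref (sinkpad);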

To question #3:
You can add customized post-processing; for example, see nvdsinfer_custombboxparser_yolov3_tlt.cpp under https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/nvdsinfer_customparser_yolov3_tlt
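
Another option for the embedding specifically: set output-tensor-meta=1 on the sgie and read its raw output tensor downstream. A sketch, assuming the 128-float embedding is the sgie's first (fp32) output layer:

/* Needs "gstnvdsinfer.h" for NvDsInferTensorMeta. With output-tensor-meta=1 on
 * an object-mode sgie, the raw tensor is attached to each object's
 * obj_user_meta_list. */
for (NvDsMetaList * l_user = obj_meta->obj_user_meta_list; l_user != NULL;
    l_user = l_user->next) {
  NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
  if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
    continue;

  NvDsInferTensorMeta *tensor_meta =
      (NvDsInferTensorMeta *) user_meta->user_meta_data;
  float *embedding = (float *) tensor_meta->out_buf_ptrs_host[0];
  /* embedding[0..127] -> feed into your face matching. */
}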

Hi @mchi,
This is regarding the approach you suggested for question 1.
If I were to modify the attach_metadata_detector function in gstnvinfer_meta_utils.cpp to parse the 5 landmarks, how should I add the array containing the landmarks to the object's obj_user_meta_list? Is this the correct approach?

(The set_metadata_ptr, user_meta_type, copy_user_meta, and release_user_meta functions are adapted from deepstream_user_metadata_app.c in sources/sample_apps.)

void attach_metadata_detector (GstNvInfer * nvinfer, GstMiniObject * tensor_out_object,
    GstNvInferFrame & frame, NvDsInferDetectionOutput & detection_output,
    float segmentationThreshold)
{
  NvDsObjectMeta *obj_meta = NULL;
  NvDsUserMetaList *user_meta_list = NULL;
  ...
  ...
  for (guint i = 0; i < detection_output.numObjects; i++) {
    NvDsInferObject & obj = detection_output.objects[i];
    ...
    obj_meta = nvds_acquire_obj_meta_from_pool (batch_meta);
    user_meta_list = obj_meta->obj_user_meta_list;
    NvDsUserMeta *um1;
    um1->user_meta_data = set_metadata_ptr (&(obj.landmarks)[0]);  // Add landmarks here
    um1->base_meta.meta_type = user_meta_type;
    um1->base_meta.copy_func = (NvDsMetaCopyFunc) copy_user_meta;
    um1->base_meta.release_func = (NvDsMetaReleaseFunc) release_user_meta;
    user_meta_list = g_list_append (user_meta_list, um1);
    ...
  }
  ...
}

set_metadata_ptr, user_meta_type, copy_user_meta, release_user_meta:

void *set_metadata_ptr (float *arr)
{
  int i = 0;
  gfloat *user_metadata = (gfloat *) g_malloc0 (10 * sizeof (gfloat));

  for (i = 0; i < 10; i++) {
    user_metadata[i] = arr[i];
  }
  return (void *) user_metadata;
}

static gpointer copy_user_meta (gpointer data, gpointer user_data)
{
  NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
  gfloat *src_user_metadata = (gfloat *) user_meta->user_meta_data;
  gfloat *dst_user_metadata = (gfloat *) g_malloc0 (10 * sizeof (gfloat));
  memcpy (dst_user_metadata, src_user_metadata, 10 * sizeof (gfloat));
  return (gpointer) dst_user_metadata;
}

static void release_user_meta (gpointer data, gpointer user_data)
{
  NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
  if (user_meta->user_meta_data) {
    g_free (user_meta->user_meta_data);
    user_meta->user_meta_data = NULL;
  }
}

I tried this way and it failed with this error:
*** stack smashing detected ***: terminated

Any hints on how to proceed / how to correct the obj_user_meta_list part?

Thanks

@mchi,
I also tried modifying the NvDsInferObjectDetectionInfo struct in nvdsinfer.h and the NvDsInferObject struct in nvdsinfer_context.h by adding a float* landmarks member for my landmarks.
This way, I can easily add the landmarks to the NvDsInferObjectDetectionInfo object in my parser function, but I couldn't access these landmarks in gstnvinfer_meta_utils.cpp through NvDsInferDetectionOutput. I have seen the original code access the bounding box this way; any idea why it doesn't work for me in this case? Can't I add my own variable to both NvDsInferObjectDetectionInfo and NvDsInferObject?

Thanks.

Hi @chionjetherng,
Sorry! I think you can refer to the pgie_pad_buffer_probe() function in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/deepstream_infer_tensor_meta_test.cpp.
With the “output-tensor-meta=1” property set on pgie, pgie will attach the tensor output data to each frame's frame_user_meta_list; you can then parse that output to extract the landmarks data:

static GstPadProbeReturn
pgie_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
...
  /* Iterate each frame metadata in batch */
  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    /* Iterate user metadata in frames to search PGIE's tensor metadata */
    for (NvDsMetaList * l_user = frame_meta->frame_user_meta_list;
        l_user != NULL; l_user = l_user->next) {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
      if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
        continue;
      ...
      /* Parse output tensor and fill detection results into objectList. */
      ...
      NvDsInferParseCustomResnet (outputLayersInfo, networkInfo,
          detectionParams, objectList);   // -----> replace with your own output parser to get the landmarks data into objectList
      ...
      /* Iterate final rectangles and attach the results into the frame's obj_meta_list. */
      for (const auto & rect : objectList) {
        NvDsObjectMeta *obj_meta =
            nvds_acquire_obj_meta_from_pool (batch_meta);
        ...
      }
      ...
}

@mchi,
Currently, I have a pipeline which looks like this:
… streammux → PGIE → queue → nvvidconv → dsexample → …

I have a probe, which looks quite similar to the pgie_pad_buffer_probe you shared above, on the src pad of the queue element.

For simplicity, here is a dummy example to illustrate what I am trying to achieve. Essentially, I wish to add new values in the probe and retrieve them in dsexample.

I modified the NvDsObjectMeta structure inside nvdsmeta.h by adding a new variable (gfloat* dummy_array). I tried to set the value of dummy_array of NvDsObjectMeta in the probe:

 /* Iterate final rectangles and attach result into frame's obj_meta_list. */
 ...
 NvDsObjectMeta *obj_meta =
      nvds_acquire_obj_meta_from_pool (batch_meta);

 obj_meta->dummy_array = (gfloat*)g_malloc0(20);

 memcpy((void*)obj_meta->dummy_array, (void*)some_array, 20);
...

but this doesn't work; it causes a memory leak.

I also tried another example, which is to set an integer instead of an array. I added a gint val to the NvDsObjectMeta structure and changed the probe to:

 /* Iterate final rectangles and attach result into frame's obj_meta_list. */
 ...
 NvDsObjectMeta *obj_meta =
      nvds_acquire_obj_meta_from_pool (batch_meta);

 obj_meta->val = 555;
 obj_meta->class_id = c;
...

I tried reading the value of obj_meta->val in dsexample; it outputs a wrong value, but the value of class_id is correct. Any comments?

Thanks

Hi @chionjetherng,
Sorry for the late response!
Have you got it solved?
Another idea: you can allocate memory to hold the landmark data and save the pointer in the misc_obj_info[MAX_USER_FIELDS] field of NvDsObjectMeta (see the NVIDIA DeepStream SDK API Reference).
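
For example (just a sketch; you own this allocation, so you also have to free it yourself once you are done with it downstream):

/* Where the object meta is created (e.g. in your pgie probe): */
gfloat *landmarks = (gfloat *) g_malloc0 (10 * sizeof (gfloat));
/* ... fill the 10 landmark values ... */
obj_meta->misc_obj_info[0] = (gint64) (guintptr) landmarks;

/* Downstream (e.g. in dsexample), read it back: */
gfloat *lm = (gfloat *) (guintptr) obj_meta->misc_obj_info[0];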

Hi,

I am following the same approach… I am having trouble adding the element probe for “pgie_pad_buffer_probe”.

I have done this:

if (config->primary_gie_config.enable)
{
  NVGSTDS_ELEM_ADD_PROBE (pipeline->common_elements.primary_tensor_buffer_probe_id,
      pipeline->common_elements.tracker_bin.queue, "src", pgie_pad_buffer_probe,
      GST_PAD_PROBE_TYPE_BUFFER, &pipeline->common_elements);
}

in “create_common_elements”, but it is giving me a segmentation fault.
Any suggestions / steps to use “pgie_pad_buffer_probe”?
Thanks.

Have you solved the problem? Can you share your experience?

Hi @jane.shen1,
How did you do the face alignment or face warping between pgie and sgie? I'm having the same problem. Please help me.

Hi @jane.shen1,

Would you mind sharing how to create a custom parser function for retinaface?

Or could you give me some hints or share your experience with that, please?

I really need to create a parser for retinaface for my work!
