Hi @chionjetherng,
Sorry! You can refer to the pgie_pad_buffer_probe() function in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/deepstream_infer_tensor_meta_test.cpp.
With the “output-tensor-meta=1” property set on the pgie, the pgie attaches the raw tensor output data to each frame’s frame_user_meta_list; you can then parse that output to extract the landmark data:
static GstPadProbeReturn
pgie_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
...
/* Iterate each frame metadata in batch */
for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL;
l_frame = l_frame->next) {
NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
/* Iterate user metadata in frames to search PGIE's tensor metadata */
for (NvDsMetaList * l_user = frame_meta->frame_user_meta_list;
l_user != NULL; l_user = l_user->next) {
NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
continue;
...
/* Parse output tensor and fill detection results into objectList. */
...
NvDsInferParseCustomResnet (outputLayersInfo, networkInfo,
detectionParams, objectList); // -----> replace with your own output parser to get the landmark data into objectList
...
/* Iterate final rectangles and attach results into the frame's obj_meta_list. */
for (const auto & rect:objectList) {
NvDsObjectMeta *obj_meta =
nvds_acquire_obj_meta_from_pool (batch_meta);
...
}
...
}