Integrated deepstream-pose-estimation into deepstream-app with small display issue

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** TX2
**• DeepStream Version** 5.0.1
**• JetPack Version (valid for Jetson only)** 4.4
**• TensorRT Version** 7
**• NVIDIA GPU Driver Version (valid for GPU only)**
**• Issue Type (questions, new requirements, bugs)**
**• How to reproduce the issue?** (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
**• Requirement details** (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

Hi Joaquin

First of all, great work!

I’ve successfully integrated the pose estimation module as a primary or secondary GIE detector based on your recent work. There is only one small issue: in tiled display mode, the lines are correct, but the circles are in the wrong places and at the wrong scale. The circles are displayed as if they were in single-channel mode. When you pick a single window, everything is correct. You can see two screenshots demonstrating this issue.


I’ve checked your code and the osd plugin’s, but had no luck. Please provide some advice.

Thanks,


Yeah, it seems circles don’t work with the tiler; we will check it.


Thanks. Looking forward to the fix.

@AndySimcoe That’s an interesting issue; I think I might have an idea for fixing it. Would you feel comfortable sharing your merged source code for the DeepStream app on GitHub or somewhere similar?


Hey @AndySimcoe, could you share your changes with us so we can repro the issue?

Hi

The main steps are as follows:

  1. In deepstream-app `main()`, I add a probe at the src pad of the secondary GIE, since I use the primary GIE with a YOLO detector:

```c
GstPad *src_pad = NULL;
GstElement *pose =
    appCtx[i]->pipeline.common_elements.secondary_gie_bin.sub_bins[0].secondary_gie;

src_pad = gst_element_get_static_pad (pose, "src");
if (!src_pad) {
  g_print ("Unable to get secondary_gie src pad\n");
} else {
  gst_pad_add_probe (src_pad, GST_PAD_PROBE_TYPE_BUFFER,
      pgie_src_pad_buffer_probe, NULL, NULL);
  gst_object_unref (src_pad);
}
```
    
  2. In this probe function, I use a function, `pose_meta_data()`, to handle the pose estimation metadata; I put it into another C++ source file named pose_meta.cpp:

```c
static GstPadProbeReturn
pgie_src_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  pose_meta_data (batch_meta);
  return GST_PAD_PROBE_OK;
}
```

  3. In pose_meta.cpp, I just changed a little bit of your original code:

```cpp
extern "C" void
pose_meta_data (NvDsBatchMeta *batch_meta)
{
  NvDsMetaList *l_frame = NULL;
  NvDsMetaList *l_obj = NULL;
  NvDsMetaList *l_user = NULL;

  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    /* Tensor output meta attached at frame level (primary GIE case) */
    for (l_user = frame_meta->frame_user_meta_list; l_user != NULL; l_user = l_user->next) {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
      if (user_meta->base_meta.meta_type == NVDSINFER_TENSOR_OUTPUT_META) {
        NvDsInferTensorMeta *tensor_meta =
            (NvDsInferTensorMeta *) user_meta->user_meta_data;
        Vec2D<int> objects;
        Vec3D<float> normalized_peaks;
        tie (objects, normalized_peaks) = parse_objects_from_tensor_meta (tensor_meta);
        create_display_meta (objects, normalized_peaks, frame_meta,
            frame_meta->source_frame_width, frame_meta->source_frame_height);
      }
    }

    /* Tensor output meta attached at object level (secondary GIE case) */
    for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      for (l_user = obj_meta->obj_user_meta_list; l_user != NULL; l_user = l_user->next) {
        NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
        if (user_meta->base_meta.meta_type == NVDSINFER_TENSOR_OUTPUT_META) {
          NvDsInferTensorMeta *tensor_meta =
              (NvDsInferTensorMeta *) user_meta->user_meta_data;
          Vec2D<int> objects;
          Vec3D<float> normalized_peaks;
          tie (objects, normalized_peaks) = parse_objects_from_tensor_meta (tensor_meta);
          create_display_meta (objects, normalized_peaks, frame_meta,
              frame_meta->source_frame_width, frame_meta->source_frame_height);
        }
      }
    }
  }
}
```

The other functions are not touched.

  4. I’ve set the secondary GIE’s property as:

```c
g_object_set (G_OBJECT (pgie), "output-tensor-meta", TRUE, ...
```

Hey, thanks. Would you mind sharing the whole directory and files with us, so we don’t have to deal with build errors and can save time fixing the issue?

Hi, my codebase has some other modules integrated which have dependencies. I’ll compose a simple, clean deepstream-app specifically for this issue and share the link later.

Hi, I just built a demo version of deepstream-app based on your pose estimation code to demonstrate this issue. I also tried to upload an engine for 4 video streams along with the source code for your convenience, but was unsuccessful — maybe due to the file upload size restriction?
Just go to the deepstream-app folder and execute: `./deepstream-app -c deepstream_app_config_pose.txt`.

deepstream-app-pose.zip (421.2 KB)

Hi,

Any progress on the fix?

Hey, we are looking into it and will update ASAP.

Thanks for reporting the issue.
I can repro your issue locally; the current tiler does not work well with circles, and we will add the feature in a later DS release.
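The symptom above (circles drawn at single-channel scale while lines are correct) is consistent with the circle coordinates never being remapped into the tiled output. As an illustration of the geometry only — `tile_coord` is a hypothetical helper, not DeepStream API — the tiler effectively shrinks each source frame to one tile and offsets it by the tile index:

```c
#include <assert.h>

/* Hypothetical helper: map one coordinate from source-frame space into
 * an n-per-axis tiled output. Shapes that keep raw source coordinates
 * skip this transform and therefore render at single-channel scale,
 * matching the symptom reported in this thread. */
static int tile_coord (int v, int src_dim, int out_dim, int tiles, int tile_index)
{
  int tile_dim = out_dim / tiles;               /* size of one tile     */
  return tile_index * tile_dim                  /* offset of this tile  */
      + v * tile_dim / src_dim;                 /* point scaled to tile */
}
```

For example, with a 1920-wide output in a 2x2 grid, a point at x=960 in a source frame should land at 1440 in tile column 1 and at 480 in tile column 0; drawn unmapped, it would sit at 960 in both cases.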

Thank you and looking forward to the new release!

Your app looks really great!
I’m a newbie in DeepStream and I don’t know how to write a custom pose parsing function like the ones in /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD. I see there are only samples for object detection and classification. Can you help me? Thanks so much!

Hey, please create a new topic for your issue.

> 4. I’ve set the secondary GIE’s property as:
> `g_object_set (G_OBJECT (pgie), "output-tensor-meta", TRUE, ...`

Excuse me, does this mean adding it to deepstream_app_main.c? On which line?

Will that make it possible to use YOLO and pose estimation together? Should both primary-gie and secondary-gie be used? I don’t understand what this means.

Hey, please create a new topic for your question if you need further support.

Add `output-tensor-meta=1` in deepstream_pose_estimation_config.txt, and the app will run normally.
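For reference, a minimal sketch of where that line would go, assuming the standard Gst-nvinfer config-file layout (the other keys shown are placeholders, not the actual contents of this file):

```
[property]
# ... existing keys for the pose model (model-engine-file, etc.) ...
output-tensor-meta=1
```

This is the config-file equivalent of setting the `output-tensor-meta` property on the nvinfer element with `g_object_set()`, as shown earlier in the thread.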