NvDsUserMeta GList access from python

Please provide complete information as applicable to your setup.

• Hardware Platform GPU
• DeepStream Version 6.3.0
• TensorRT Version v8503
• NVIDIA GPU Driver Version 530.30.02
• Issue Type questions

I’ve written a C++ plugin following the gst-dsexample guidelines, I can execute that plugin with the python API, and it seems to be running fine. Within the C++ plugin, I have a list that I want to be able to access from a python probe. After reading through the bindings doc, my understanding is I don’t need to write custom bindings for GList types, but maybe I’m wrong. So I have the following code in my C++ plugin to attach a specific list to the user metadata container:

GList *userList = NULL;
// Fill the list with integer values (ival) at a constant size
for (int i = 0; i < 500; i++) {
    userList = g_list_append(userList, GINT_TO_POINTER(ival));
}

#define NVDS_USER_META (nvds_get_user_meta_type("NVIDIA.NVINFER.USER_META"))
NvDsMetaType user_meta_type = NVDS_USER_META;

user_meta = nvds_acquire_user_meta_from_pool(batch_meta);
if (user_meta) {
    // Attach userList to the meta
    user_meta->user_meta_data = userList;
    user_meta->base_meta.meta_type = user_meta_type;
    user_meta->base_meta.copy_func = NULL;
    user_meta->base_meta.release_func = NULL;
    // Attach the user meta that contains userList to the frame meta
    nvds_add_user_meta_to_frame(frame_meta, user_meta);
}

I then execute that stream with the python API and I can see the userList getting populated as expected, but I don’t see the user meta when I probe in python in the usual way:

batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
l_frame = batch_meta.frame_meta_list
while l_frame is not None:
    frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
    l_user = frame_meta.frame_user_meta_list
    while l_user is not None:
        user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        print(user_meta.base_meta.meta_type)
        try:
            l_user = l_user.next
        except StopIteration:
            break
    try:
        l_frame = l_frame.next
    except StopIteration:
        break
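For reference, `frame_meta_list` and `frame_user_meta_list` are both singly linked lists exposed through `.data`/`.next`, so the outer loop must advance `l_frame` and the inner loop must advance `l_user`. The traversal shape can be illustrated with a pure-Python mock that stands in for the pyds objects (no DeepStream required; `Node` and the payload dicts here are illustrative, not pyds types):

```python
# Minimal stand-in for a GList-style singly linked list: each node has
# .data (payload) and .next (following node or None), mirroring how pyds
# exposes frame_meta_list / frame_user_meta_list.
class Node:
    def __init__(self, data, nxt=None):
        self.data = data
        self.next = nxt

def build(items):
    """Build a linked list preserving the order of items."""
    head = None
    for item in reversed(items):
        head = Node(item, head)
    return head

# Two "frames", each carrying its own user-meta list.
frames = build([
    {"frame_num": 0, "user_meta_list": build(["SEGMENTATION_META", "USER_META"])},
    {"frame_num": 1, "user_meta_list": build(["SEGMENTATION_META", "USER_META"])},
])

visited = []
l_frame = frames                      # set once, BEFORE the loop
while l_frame is not None:
    frame = l_frame.data
    l_user = frame["user_meta_list"]
    while l_user is not None:
        visited.append((frame["frame_num"], l_user.data))
        l_user = l_user.next          # advance the inner list
    l_frame = l_frame.next            # advance the outer list

print(visited)
# → [(0, 'SEGMENTATION_META'), (0, 'USER_META'),
#    (1, 'SEGMENTATION_META'), (1, 'USER_META')]
```

If the outer list is never advanced (or is reassigned to the head inside the loop), the probe only ever inspects the first frame, which can mask metadata attached to later frames.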

I never see the GList that I thought I populated with the C++ plugin with this python code.

My understanding is the NvDsUserMeta object type is a GList as well, and we can just append to that list without needing to add binding or specifications elsewhere. Do I need to write or modify the pyds bindings to access that GList from python? Or do I need to execute different/additional C++ code in my plugin? Thanks!


I’ve tried following the code in deepstream_user_metadata_app.c, eg

NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
     l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);

    /* Acquire NvDsUserMeta user meta from pool */
    user_meta = nvds_acquire_user_meta_from_pool (batch_meta);

    /* Set NvDsUserMeta below */
    user_meta->user_meta_data = (void *) set_metadata_ptr ();
    user_meta->base_meta.meta_type = user_meta_type;
    user_meta->base_meta.copy_func = (NvDsMetaCopyFunc) copy_user_meta;
    user_meta->base_meta.release_func = (NvDsMetaReleaseFunc) release_user_meta;

    /* We want to add NvDsUserMeta at frame level */
    nvds_add_user_meta_to_frame (frame_meta, user_meta);
}

but I also don’t see any additional user meta attached to the frame meta from the python API. I’m running this with a segmentation model, and that is the only user meta I see from python, when I expect two per frame: pyds.NVDSINFER_SEGMENTATION_META and pyds.NVDS_USER_META

If I understand you correctly, you want to add NVDS_USER_META to the dsexample element and then get it in the downstream element in python, right?

You can refer to deepstream-custom-binding-test: add a probe function to dsexample’s src pad, add the metadata there, and then access it downstream. This is a simpler approach that requires no changes to the C++ code.

Hi @junshengy, I’ve run a CUDA kernel with the C++ API and I want to attach the outputs of that kernel as NVDS_USER_META data to the frame buffer. Then I want to be able to retrieve that data with the python API when I run the pipeline with python. I don’t need to add metadata with the python API, only retrieve data that was added upstream by a C++ plugin that calls a CUDA kernel. The problem is I can’t see the NVDS_USER_META when I run the pipeline with the python API. Does that make sense? Thanks

The functionality you want is achievable.

You can refer to my patch, add user metadata in dsexample, and access it in test1(python).

For your code, if you want to add a custom NVDS_USER_META type, you need to implement the corresponding python bindings; otherwise it will not be accessible in python. You can refer to the cpp file bindnvdsmeta.cpp.
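To see why a binding is needed: `user_meta_data` is just a `void *`, so on the Python side you only hold a raw address until some code reinterprets the bytes at that address with a known struct layout. A hedged pure-Python sketch using ctypes, standing in for what a pyds cast binding does in C++ (the `H264parseMeta` layout here mirrors the one-field struct from the patch; everything else is illustrative):

```python
import ctypes

# Mirror of the C struct from the dsexample patch (one guint field).
class H264parseMeta(ctypes.Structure):
    _fields_ = [("parser_frame_num", ctypes.c_uint)]

# Pretend this is the payload a C plugin allocated and attached:
meta = H264parseMeta(parser_frame_num=42)
raw_address = ctypes.addressof(meta)   # what user_meta_data amounts to

# The bare address tells Python nothing about the contents. A "cast"
# reinterprets the bytes at that address using the struct layout --
# conceptually what a pyds *.cast() binding does on the C++ side.
recovered = ctypes.cast(raw_address, ctypes.POINTER(H264parseMeta)).contents
print(recovered.parser_frame_num)  # → 42
```

Without a binding (or an agreed-upon layout like the one above), Python cannot safely interpret a custom payload, which is why a custom meta type needs its own cast function in the bindings.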

diff --git a/sources/gst-plugins/gst-dsexample/gstdsexample.cpp b/sources/gst-plugins/gst-dsexample/gstdsexample.cpp
index d5399c7..5050401 100644
--- a/sources/gst-plugins/gst-dsexample/gstdsexample.cpp
+++ b/sources/gst-plugins/gst-dsexample/gstdsexample.cpp
@@ -318,6 +318,65 @@ gst_dsexample_get_property (GObject * object, guint prop_id,
   }
 }
 
+//#define NVDS_GST_META_BEFORE_DECODER_EXAMPLE (nvds_get_user_meta_type((char *)"NVIDIA.DECODER.GST_META_BEFORE_DECODER"))
+guint parsed_frame_number = 0;
+
+typedef struct _H264parseMeta
+{
+  guint parser_frame_num;
+} H264parseMeta;
+
+/* gst meta copy function set by user */
+static gpointer h264parse_meta_copy_func(gpointer data, gpointer user_data)
+{
+  NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
+  H264parseMeta *src_h264parse_meta = (H264parseMeta *)user_meta->user_meta_data;
+  H264parseMeta *dst_h264parse_meta = (H264parseMeta*)g_malloc0(
+      sizeof(H264parseMeta));
+  memcpy(dst_h264parse_meta, src_h264parse_meta, sizeof(H264parseMeta));
+  return (gpointer)dst_h264parse_meta;
+}
+
+/* gst meta release function set by user */
+static void h264parse_meta_release_func(gpointer data, gpointer user_data)
+{
+  NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
+  H264parseMeta *h264parse_meta = (H264parseMeta *)user_meta->user_meta_data;
+  if(h264parse_meta) {
+    g_free(h264parse_meta);
+    h264parse_meta = NULL;
+  }
+  user_meta->user_meta_data = NULL;
+}
+
+void attach_frame_custom_metadata (NvDsBatchMeta *batch_meta, NvDsFrameMeta *frame_meta)
+{
+  H264parseMeta *h264parse_meta = (H264parseMeta *)g_malloc0(sizeof(H264parseMeta));
+  if(h264parse_meta == NULL)
+  {
+    g_print("no buffer abort !!!! \n");
+    abort();
+  }
+  /* Add dummy metadata */
+  h264parse_meta->parser_frame_num = parsed_frame_number++;
+
+  NvDsUserMeta *user_meta =
+            nvds_acquire_user_meta_from_pool (batch_meta);
+  if (user_meta) {
+    user_meta->user_meta_data = (void *) h264parse_meta;
+    user_meta->base_meta.meta_type = NVDS_USER_META;
+    user_meta->base_meta.copy_func =
+        (NvDsMetaCopyFunc) h264parse_meta_copy_func;
+    user_meta->base_meta.release_func =
+        (NvDsMetaReleaseFunc) h264parse_meta_release_func;
+    nvds_add_user_meta_to_frame (frame_meta, user_meta);
+  }
+
+  g_print("GST H264parse Meta attached for Frame_Num = %d\n",
+      h264parse_meta->parser_frame_num);
+  g_print("Attached Metadata before decoder: Parsed frame_num = %d\n\n", h264parse_meta->parser_frame_num);
+}
+
 /**
  * Initialize all resources and start the output thread
  */
@@ -759,6 +818,7 @@ gst_dsexample_transform_ip (GstBaseTransform * btrans, GstBuffer * inbuf)
 #endif
       /* Attach the metadata for the full frame */
       attach_metadata_full_frame (dsexample, frame_meta, scale_ratio, output, i);
+      attach_frame_custom_metadata(batch_meta, frame_meta);
       i++;
       free (output);
     }
diff --git a/apps/deepstream-test1/deepstream_test_1.py b/apps/deepstream-test1/deepstream_test_1.py
index 861cefc..187ddd6 100755
--- a/apps/deepstream-test1/deepstream_test_1.py
+++ b/apps/deepstream-test1/deepstream_test_1.py
@@ -47,6 +47,7 @@ def osd_sink_pad_buffer_probe(pad,info,u_data):
     # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
     # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
     batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
+
     l_frame = batch_meta.frame_meta_list
     while l_frame is not None:
         try:
@@ -82,6 +83,19 @@ def osd_sink_pad_buffer_probe(pad,info,u_data):
             except StopIteration:
                 break
 
+        l_user=frame_meta.frame_user_meta_list
+        while l_user is not None:
+            try:
+                # Casting l_obj.data to pyds.NvDsUserMeta
+                user_meta=pyds.NvDsUserMeta.cast(l_user.data)
+                if (user_meta and user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDS_USER_META):
+                    print("got user meta.....")
+            except StopIteration:
+                break
+            try: 
+                l_user=l_user.next
+            except StopIteration:
+                break
         # Acquiring a display meta object. The memory ownership remains in
         # the C code so downstream plugins can still access it. Otherwise
         # the garbage collector will claim it when this probe function exits.
@@ -167,6 +181,10 @@ def main(args):
     if not pgie:
         sys.stderr.write(" Unable to create pgie \n")
 
+    dsexample = Gst.ElementFactory.make("dsexample", "dsexample")
+    if not dsexample:
+        sys.stderr.write(" Unable to create dsexample \n")
+
     # Use convertor to convert from NV12 to RGBA as required by nvosd
     nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
     if not nvvidconv:
@@ -186,7 +204,8 @@ def main(args):
             sys.stderr.write(" Unable to create nv3dsink \n")
     else:
         print("Creating EGLSink \n")
-        sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
+        # sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
+        sink = Gst.ElementFactory.make("fakesink", "nvvideo-renderer")
         if not sink:
             sys.stderr.write(" Unable to create egl sink \n")
 
@@ -206,6 +225,7 @@ def main(args):
     pipeline.add(decoder)
     pipeline.add(streammux)
     pipeline.add(pgie)
+    pipeline.add(dsexample)
     pipeline.add(nvvidconv)
     pipeline.add(nvosd)
     pipeline.add(sink)
@@ -225,7 +245,8 @@ def main(args):
         sys.stderr.write(" Unable to get source pad of decoder \n")
     srcpad.link(sinkpad)
     streammux.link(pgie)
-    pgie.link(nvvidconv)
+    pgie.link(dsexample)
+    dsexample.link(nvvidconv)
     nvvidconv.link(nvosd)
     nvosd.link(sink)

Thanks @junshengy. I’ve copied your cpp code exactly into my plugin and that seems to be running fine as I’m seeing these outputs on the console at each loop as the pipeline runs:

GST H264parse Meta attached for Frame_Num = 1
Attached Metadata before decoder: Parsed frame_num = 1
GST H264parse Meta attached for Frame_Num = 2
Attached Metadata before decoder: Parsed frame_num = 2

I also copied your python probe code exactly, however I am still not seeing any NvDsUserMeta from python. I am running this dsexample plugin in-line with a segmentation model and the only l_obj.data type I see from python is NvDsMetaType.NVDSINFER_SEGMENTATION_META. My python probe is only a couple elements downstream from the dsexample element, does the probe need to be moved even further downstream maybe? Can you confirm that you see NvDsMetaType.NVDS_USER_META from python? Thanks

There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one. Thanks

Yes, I’m sure. Please refer to my patch first, which is for test1.
If the patch is applied, the log will be output.

As I mentioned above, if you want to add a custom META in python, you need to add the corresponding python binding.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.