Tried to get user_meta from sgie, got None

Please provide complete information as applicable to your setup.

• GTX 1070
• DeepStream 5.1
• TensorRT 7.2.3
• Nvidia Driver 465

I used the code from GitHub - preronamajumder/deepstream-lpr-python-version: Python version for NVIDIA Deepstream's LPR (which is the Python version of GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream) to do some experiments.
I changed the second gie's model to my own model and then added this code:

if not sgie_src_pad:
    sys.stderr.write(" Unable to get sgie 1 src pad \n")
sgie_src_pad.add_probe(Gst.PadProbeType.BUFFER, sgie_src_pad_buffer_probe, 0, args)

I tried to draw the shape of the cars in the output parser function sgie_src_pad_buffer_probe.
However, the code below doesn't work because user_meta is None:

gst_buffer = info.get_buffer()
if not gst_buffer:
    print("Unable to get GstBuffer")
    return Gst.PadProbeReturn.OK

lp_dict = {}
# Retrieve batch metadata from the gst_buffer.
# Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
# C address of gst_buffer as input, which is obtained with hash(gst_buffer).
batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
l_frame = batch_meta.frame_meta_list
while l_frame is not None:
    try:
        # l_frame.data needs a cast to pyds.NvDsFrameMeta.
        # The cast keeps ownership of the underlying memory in the
        # C code, so the Python garbage collector will leave it alone.
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
    except StopIteration:
        break

    print("Frame Number is ", frame_meta.frame_num)
    print("Source id is ", frame_meta.source_id)
    print("Batch id is ", frame_meta.batch_id)
    print("Source Frame Width ", frame_meta.source_frame_width)
    print("Source Frame Height ", frame_meta.source_frame_height)
    print("Num object meta ", frame_meta.num_obj_meta)

    l_obj = frame_meta.obj_meta_list
    while l_obj is not None:
        try:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
        except StopIteration:
            break
        user_meta_list = obj_meta.obj_user_meta_list  # this is None
        l_obj = l_obj.next
    l_frame = l_frame.next

I also tried another way to get the user meta data but still got None:

    l_user = frame_meta.frame_user_meta_list

Why did this happen? I only added my own callback function and changed sgie1's model and config file. Shouldn't it at least give me some metadata that I can convert to a tensor, even if it is wrong data?

To be clearer: the pgie detects the car, the second gie should take the car crop defined by the bounding box as input and output a matrix describing the car's shape. How can I get that output matrix from my secondary gie?
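One detail that may be relevant here (added as a hedged note, not from the thread): by default Gst-nvinfer does not attach the raw model output to the metadata at all; it only does so when `output-tensor-meta` is enabled in the gie's config file. A sketch of the relevant keys in the nvinfer `[property]` group (file contents illustrative):

```ini
[property]
# Attach the raw output tensors to each object/frame as
# NvDsInferTensorMeta in the user meta list.
output-tensor-meta=1
# network-type=100 ("Other") tells nvinfer to skip its built-in
# detector/classifier parsing and let the app parse the tensors itself.
network-type=100
```

Without `output-tensor-meta=1`, a probe downstream of the sgie will see `obj_user_meta_list` as None even though inference ran.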


Does this issue only occur when you do the modifications?


Hi! Yes, I tried it with the original pgie and original sgie, and I cannot get the user_data to handle the output of the model.
I also changed sgie1.get_static_pad("src") to the pgie while keeping the original sgie, and it could not get the user_data either.

if not sgie_src_pad:
    sys.stderr.write(" Unable to get sgie 1 src pad \n")
sgie_src_pad.add_probe(Gst.PadProbeType.BUFFER, sgie_src_pad_buffer_probe, 0, args)

Hey customer,
Where did you add the user meta? For how to add user meta, you can refer to deepstream-user-metadata-test.

Hi @bcao
I just gave up and built the pipeline from another sample, and it works now. It seems the LPR-deployment pipeline simply cannot get the metadata, even when I tried the C code.

Can I ask another question? If I change the pgie to a model that can detect both cars and humans, are there any examples of building a pipeline that feeds the cars' bbox metadata into one sgie for handling cars and the humans' bbox metadata into another sgie for handling humans?
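As a hedged sketch of how this routing is usually done: Gst-nvinfer secondary gies can filter which upstream objects they operate on via `operate-on-gie-id` and `operate-on-class-ids` in their config, so two sgies with different class filters give exactly this split. The class ids below are assumptions (they depend on the pgie's label file):

```ini
# Car-handling sgie config sketch (illustrative values;
# class id 0 is assumed to be "car" in the pgie's labels).
[property]
process-mode=2            # secondary mode: operate on detected objects
operate-on-gie-id=1       # only objects from the pgie with gie-unique-id=1
operate-on-class-ids=0    # only the "car" class

# A second sgie for humans would use the same keys with, e.g.,
# operate-on-class-ids=2 if "person" is class 2.
```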


I did another test for deepstream_reference_apps/back-to-back-detectors at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub

I simply added this function:

static GstPadProbeReturn
sgie__src_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsMetaList *l_frame = NULL;
  NvDsMetaList *l_user = NULL;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  printf ("check point 1\n");
  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
    for (l_user = frame_meta->frame_user_meta_list; l_user != NULL;
         l_user = l_user->next) {
      printf ("check point 2\n");   /* never reached: the user meta list is empty */
    }
  }
  return GST_PAD_PROBE_OK;
}

and added

  GstPad *sgie1_src_pad = gst_element_get_static_pad (secondary_detector, "src");
  if (!sgie1_src_pad) {
    g_print ("Unable to get sgie1 src pad\n");
  } else {
    gst_pad_add_probe (sgie1_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
                       sgie__src_pad_buffer_probe, (gpointer) sink, NULL);
    gst_object_unref (sgie1_src_pad);
  }

at the end.

I got the same problem that I cannot get the user data.

It just doesn't work again. I can see the output of "check point 1", but "check point 2" is never printed.
Why does this happen? It is just simple code, and I only want to access the output of my sgie.

Or, let me make this question simpler.

How can I get the output data from the pgie and sgie models in deepstream_reference_apps/back-to-back-detectors at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub?

I mean the output coming literally from the model directly, in the format of NvDsInferTensorMeta or something similar.
For example, if I have an ONNX model whose output is a 3x3 matrix, and I replace the sgie in deepstream_reference_apps/back-to-back-detectors at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub with my ONNX model, how can I get the 3x3 matrix output?

Hey customer,
It’s better to create a new topic for your new question.

For how to get NvDsInferTensorMeta, you can refer to deepstream-infer-tensor-meta-test.
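As a sketch of the pattern that sample uses: the DeepStream meta lists are C linked lists walked through `.data`/`.next`, and with `output-tensor-meta=1` the raw tensors appear as user meta entries whose `meta_type` is `NVDSINFER_TENSOR_OUTPUT_META`. The helper below is hypothetical (not part of pyds); the pyds names in the commented usage are from the DeepStream Python bindings:

```python
def walk_meta_list(head, cast):
    """Iterate a DeepStream-style linked list (nodes expose .data and
    .next), applying `cast` to each node's payload. pyds cast functions
    raise StopIteration at end-of-list, so treat that as termination."""
    node = head
    while node is not None:
        try:
            yield cast(node.data)
        except StopIteration:
            break
        node = node.next

# Intended use with pyds (requires a DeepStream install; illustration only):
# for frame_meta in walk_meta_list(batch_meta.frame_meta_list,
#                                  pyds.NvDsFrameMeta.cast):
#     for user_meta in walk_meta_list(frame_meta.frame_user_meta_list,
#                                     pyds.NvDsUserMeta.cast):
#         if user_meta.base_meta.meta_type == \
#                 pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
#             tensor_meta = pyds.NvDsInferTensorMeta.cast(
#                 user_meta.user_meta_data)
```

The same filter on `meta_type` is what distinguishes tensor output from other user meta (tracker data, custom meta, and so on) attached to the same object.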


I still don't know what happened, but when I changed all the code to be based on deepstream-infer-tensor-meta-test, the problem was gone.