Unable to access the source id of the metadata

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi! I am having a problem accessing the source_id of my batch_user_meta_list.

In my pipeline I have streammux → preprocess → pgie1 → pgie2 → tracker → tiler → nvvidconv → nvvidconv_postosd → caps → encoder → rtppay → sink

I have a probe function at the src pad of pgie1.

I have 6 sources, but my preprocess element works on only 3 of them. My pgie1 then outputs tensor meta for the ROIs of these 3 sources.

My probe function is defined as:

import ctypes
import numpy as np
import pyds
from gi.repository import Gst

def tiler_sink_pad_buffer_probe_v2(pad, info, u_data):
    frame_number = 0
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer")
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    l_user = batch_meta.batch_user_meta_list

    source_idx = [] # Source ids in the order they appear in each batch
    array1 = [] # Source ids that are in the preprocess config file, in the order they appear in the meta
    while l_frame is not None:        
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        # Get the source id for this batch
        source_id = frame_meta.source_id
        frame_number = frame_meta.frame_num
        source_idx.append(source_id)

        # Executed once all the frames in the batch are processed and we have the source ids in order
        if len(source_idx) == 6:
            print("---------------------------------------------------")
            print("Frame NO:{}".format(frame_number))
            print(source_idx)
            print("---------------------------------------------------")
            for i in source_idx:
                if i in hardcoded_source_ids: # hardcoded_source_ids holds the sources that go through the preprocess element
                    array1.append(i)
                    if len(array1) == len(hardcoded_source_ids):
                        print(array1)
            j = 0
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                # print(dir(user_meta))
                # print(dir(user_meta.base_meta))    
                # print(user_meta.user_meta_data)
                if user_meta and user_meta.base_meta.meta_type == pyds.NVDSINFER_TENSOR_OUTPUT_META:
                    try:
                        tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                        #print(dir(tensor_meta))
                        #print(tensor_meta.network_info)
                        #print(tensor_meta.num_output_layers)
                        #print(tensor_meta.output_layers_info)
                    except StopIteration:
                        break
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)
                    ptr = ctypes.cast(pyds.get_ptr(layer.buffer), ctypes.POINTER(ctypes.c_float))
                    features = np.ctypeslib.as_array(ptr, shape=(4,))
                    labels = ['green', 'none', 'red', 'yellow']
                    max_index = np.argmax(features)
                    #print(len(array1))
                    print("Source = {} Frame Number = {} label = {}".format(array1[j], frame_number, labels[max_index]))

                    j += 1

                l_user = l_user.next
        try:
            l_frame = l_frame.next
        except StopIteration:
            break    
    return Gst.PadProbeReturn.OK

The problem I am facing is that the source_id I assign to each label doesn't match its actual source id.

I inspected all the objects in the batch_user_meta_list, but I couldn't find a source_id associated with that meta anywhere.

So how can I get the correct label for the correct source_id?

Any ideas or help would be greatly appreciated. Thank you in advance!

Please fill in the relevant information first.

And what do you mean by "my source_id for the label doesn't match with their actual source id"?

By source_id I mean the camera source id.

Suppose 3 of the 6 sources feeding my streammux go through the preprocess element. I configured pgie1 to process the tensor output of these 3 sources, which results in 3 classification outputs (one per source). How do I associate each generated label with the specific source that produced it? The output labels are not in a fixed order, and the order changes with every batch.
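To make the mismatch concrete, here is a plain-Python sketch (no DeepStream required; the source ids and labels are made up) of why matching the j-th tensor output to the j-th preprocessed source, as my probe currently does, breaks when the batch order changes:

```python
# Illustration only: hardcoded_source_ids mirrors the variable in my probe.
hardcoded_source_ids = {0, 2, 4}  # sources routed through preprocess (example values)

def labels_by_position(batch_order, tensor_labels):
    # Assumes tensor outputs arrive in the same order as the preprocessed
    # sources appear in this batch -- the assumption my probe makes.
    preprocessed = [s for s in batch_order if s in hardcoded_source_ids]
    return dict(zip(preprocessed, tensor_labels))

# Batch 1: frames arrive as 0..5, so outputs are assigned to 0, 2, 4
print(labels_by_position([0, 1, 2, 3, 4, 5], ["red", "green", "yellow"]))
# -> {0: 'red', 2: 'green', 4: 'yellow'}
# Batch 2: same sources, different arrival order -> the same labels land
# on different sources, even though nothing about the cameras changed
print(labels_by_position([2, 0, 4, 1, 3, 5], ["red", "green", "yellow"]))
# -> {2: 'red', 0: 'green', 4: 'yellow'}
```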

I hope this helps you understand. Sorry for the confusion.

Please let me know if you want further details.

The source_id comes from NvDsFrameMeta. The NvDsLabelInfo comes from NvDsClassifierMeta, which also comes from NvDsFrameMeta. You can refer to our source code to learn the relationship between these structures.

/opt/nvidia/deepstream/deepstream/sources/includes/nvdsmeta.h

This is true if I don't use "output-tensor-meta" = 1 in pgie1. But since I am using the preprocess element, the developer guide requires me to set the property "output-tensor-meta" = 1. When I do so, I get an NvDsUserMeta in the batch meta while my source id lives in the frame meta, which is why I am facing this issue. For the normal case without the preprocess element it was straightforward, as you mentioned, and I am aware of that.
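One way I could avoid positional matching altogether, assuming the pyds bindings expose GstNvDsPreProcessBatchMeta as the deepstream-preprocess-test Python app suggests: each NvDsRoiMeta in its roi_vector carries its own frame_meta, so the source_id travels with the ROI. A sketch of that pairing logic using hypothetical stand-in objects (real code would first cast with pyds.GstNvDsPreProcessBatchMeta.cast(user_meta.user_meta_data)):

```python
# Sketch (assumption: preprocess batch meta entries each carry a frame_meta,
# as in the deepstream-preprocess-test Python app).
def pair_rois_with_sources(roi_vector):
    """Pair each ROI with the source_id carried by its own frame_meta."""
    return [(roi.frame_meta.source_id, roi) for roi in roi_vector]

# Hypothetical stand-ins mimicking NvDsFrameMeta / NvDsRoiMeta:
class FakeFrameMeta:
    def __init__(self, source_id):
        self.source_id = source_id

class FakeRoiMeta:
    def __init__(self, source_id):
        self.frame_meta = FakeFrameMeta(source_id)

rois = [FakeRoiMeta(4), FakeRoiMeta(0), FakeRoiMeta(2)]
print([sid for sid, _ in pair_rois_with_sources(rois)])  # -> [4, 0, 2]
```

With this, no matter how the batch is ordered, each ROI's tensor output can be attributed to its own source.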

Regards

Why do you set output-tensor-meta? It's used for postprocessing. Could you use this demo to reproduce your problem? deepstream-preprocess-test

I cannot reproduce the issue exactly with the demo. In the demo the pgie is used as a detector, but I am using my pgie as a classifier. When I use the pgie as a classifier, there are no objects in the obj_meta_list of the frame meta. So I had to use output tensor meta to get my pgie data. How can this be solved?

As your pipeline shows, …pgie → pgie…, are these 2 infer elements both classifiers? Could you describe your pipeline in detail?
When you use a pgie as a classifier, you can refer to the link below:
https://forums.developer.nvidia.com/t/deepstream-sdk-faq/80236/25

Let me provide more details. I have a preprocess element before pgie1, which is a classifier. The reason I am using preprocess is that my classifier (pgie1) needs to operate on a specific region: it is a traffic-light classifier, so it only needs to look at the ROI containing the traffic light to identify red, green, or yellow. pgie2, however, is the detector (in my case a YOLOv4 detector) that identifies vehicles in the whole frame. To identify a violation, I need the output of my pgie1 classification, which is the color of the traffic light. So in the case of multiple sources, I need to know which source has a red light and which source has a green light. Then I detect vehicles with pgie2 and track them with the tracker.

I have already figured out all the configurations. I just need to extract my metadata after pgie1, so that I know which source has which light.

Please let me know if you still cannot figure out what I am asking.

OK. When you create the nvstreammux, you can use the camera id to request the sink pad. The source id corresponds to the sink pad id.

for i in range(number_sources):
    print("Creating source_bin ", i, " \n ")
    uri_name = args[i]
    if uri_name.find("rtsp://") == 0:
        is_live = True
    source_bin = create_source_bin(i, uri_name)
    if not source_bin:
        sys.stderr.write("Unable to create source bin \n")
    pipeline.add(source_bin)
    g_source_bin_list[i] = source_bin
    padname = "sink_%u" % i
    sinkpad = streammux.get_request_pad(padname)
    if not sinkpad:
        sys.stderr.write("Unable to create sink pad bin \n")
    srcpad = source_bin.get_static_pad("src")
    if not srcpad:
        sys.stderr.write("Unable to create src pad bin \n")
    srcpad.link(sinkpad)

if is_live:
    print("At least one of the sources is live")
    streammux.set_property('live-source', 1)

def create_source_bin(index, uri):
    #with execution_lock:
    global g_source_id_list
    g_source_id_list[index] = index
    print("Creating source bin")

    # Create a source GstBin to abstract this bin's content from the rest of the
    # pipeline
    bin_name = "source-bin-%02d" % index
    print(bin_name)
    nbin = Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write(" Unable to create source bin \n")

    # Source element for reading from the uri.
    # We will use decodebin and let it figure out the container format of the
    # stream and the codec and plug the appropriate demux and decode plugins.
    if file_loop:
        # use nvurisrcbin to enable file-loop
        uri_decode_bin=Gst.ElementFactory.make("nvurisrcbin", "uri-decode-bin")
        uri_decode_bin.set_property("file-loop", 1)
        uri_decode_bin.set_property("cudadec-memtype", 0)
    else:
        uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    if not uri_decode_bin:
        sys.stderr.write(" Unable to create uri decode bin \n")
    # We set the input uri to the source element
    uri_decode_bin.set_property("uri", uri)
    # Connect to the "pad-added" signal of the decodebin which generates a
    # callback once a new pad for raw data has been created by the decodebin
    uri_decode_bin.connect("pad-added", cb_newpad, nbin)
    uri_decode_bin.connect("child-added", decodebin_child_added, nbin)

    # We need to create a ghost pad for the source bin which will act as a proxy
    # for the video decoder src pad. The ghost pad will not have a target right
    # now. Once the decode bin creates the video decoder and generates the
    # cb_newpad callback, we will set the ghost pad target to the video decoder
    # src pad.
    Gst.Bin.add(nbin, uri_decode_bin)
    bin_pad = nbin.add_pad(Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC))
    if not bin_pad:
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    #Set status of the source to enabled
    g_source_enabled[index] = True
    return nbin

where,
number_sources = len(args)

I request a streammux sink pad for each camera id, from 0 up to the number of sources.

Is this code taking the args in a random order rather than in the order they are passed? Is this the area where I need to improve my logic?

Can you provide an example I can refer to for how to access the respective source id in a downstream probe?

When I watch the source bins being created:

Creating streamux

Creating source_bin 0

file:///opt/nvidia/deepstream/deepstream-6.3/sources/apps/my_app/rl01-ov.mp4
Creating source bin
source-bin-00
Creating source_bin 1

file:///opt/nvidia/deepstream/deepstream-6.3/sources/apps/my_app/rl01-hd-new.mp4
Creating source bin
source-bin-01
Creating source_bin 2

file:///opt/nvidia/deepstream/deepstream-6.3/sources/apps/my_app/rl01-10-ov.mp4
Creating source bin
source-bin-02
Creating source_bin 3

file:///opt/nvidia/deepstream/deepstream-6.3/sources/apps/my_app/rl01-10-hd-new.mp4
Creating source bin
source-bin-03
Creating source_bin 4

file:///opt/nvidia/deepstream/deepstream-6.3/sources/apps/my_app/rl01-29-ov.mp4
Creating source bin
source-bin-04
Creating source_bin 5

file:///opt/nvidia/deepstream/deepstream-6.3/sources/apps/my_app/rl01-29-hd-new.mp4
Creating source bin
source-bin-05

They appear in the order I provided the inputs. So the problem remains: after pgie1 (a classifier that only acts on the sources defined in my preprocess config) performs classification, how can I tell which source each label is associated with?

You need to set this pad number to your camera id.
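For example (a plain-Python sketch; the camera names and the helper are made up for illustration), you can keep an explicit map from the requested pad index to your camera id at setup time, then translate frame_meta.source_id back in the probe:

```python
# Sketch with assumed camera names: record which camera goes to which
# streammux sink pad when requesting pads.
camera_ids = ["cam-north", "cam-east", "cam-west"]

# Pad index i (requested as "sink_%u" % i) is what shows up downstream
# as frame_meta.source_id.
pad_index_to_camera = {i: cam for i, cam in enumerate(camera_ids)}

def camera_for_source(source_id):
    # Hypothetical helper: translate a source_id seen in a probe back to
    # the physical camera it was wired to at pipeline-construction time.
    return pad_index_to_camera[source_id]

print(camera_for_source(2))  # -> cam-west
```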

Thank you for the help. I am a bit confused here. Is there any reference for setting the camera id? Also, which meta will this camera id be part of?

I found this in the Gst-nvstreammux documentation:

This is my batchmeta:

['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'base_meta', 'batch_user_meta_list', 'cast', 'classifier_meta_pool', 'display_meta_pool', 'frame_meta_list', 'frame_meta_pool', 'label_info_meta_pool', 'max_frames_in_batch', 'meta_mutex', 'misc_batch_info', 'num_frames_in_batch', 'obj_meta_pool', 'reserved', 'user_meta_pool']

So which metadata has the pad info?

This is a function of GStreamer itself. Please refer to the link: gst_caps_get_structure