How to access tensormeta information for 1pgie and 2sgie

Please provide complete information as applicable to your setup.

• Hardware Platform: GPU
• DeepStream Version: 6.1.1
• JetPack Version (valid for Jetson only)
• TensorRT Version: 8.4.0
• NVIDIA GPU Driver Version: 525
• Issue Type (questions, new requirements, bugs): question

Hello!

I want to access the tensor meta for the pgie as well as the sgies. I'm using Python. My pipeline is simple for now: pgie1 -> sgie1 -> sgie2.

How can I access the tensor meta for all three models?

I know that for a pgie it is attached to frame_user_meta_list,
and for an sgie it is attached to obj_user_meta_list.

uridecodebin -> streammux -> pgie1 -> sgie1 -> sgie2 -> tiler -> osd -> capsfilter -> sink

I'm attaching the probe to the OSD pad.

  1. When I probe only the pgie, I am able to see its output layers,
     but when I try the pgie plus sgie1, tensor meta comes only for the pgie, not for the sgie.
  2. When I try to access tensor meta for only sgie1 and sgie2, I get it for sgie1 but not for sgie2.

I tried to filter on tensor_meta.unique_id, but the tensor meta is not coming for both.

My question is: how can I access the tensor meta for both sgie models?

    l_user_meta = obj_meta.obj_user_meta_list
    print("l_user_meta", l_user_meta)

    while l_user_meta:
        user_meta = pyds.NvDsUserMeta.cast(l_user_meta.data)
        print("user_meta", user_meta)
        if user_meta.base_meta.meta_type != pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
            # Advance before skipping, otherwise the loop never moves on.
            l_user_meta = l_user_meta.next
            continue

        tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)

        # Boxes in the tensor meta are in network resolution, which is
        # found in tensor_meta.network_info. Use this info to scale boxes
        # to the input frame resolution.
        if tensor_meta.unique_id == 3:
            print("AGE model tensor meta")
            layers_info = []
            for i in range(tensor_meta.num_output_layers):
                layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                layers_info.append(layer)
                print("layer name: {}".format(layer.layerName))
        if tensor_meta.unique_id == 2:
            print("GENDER model tensor meta")
            layers_info = []
            for i in range(tensor_meta.num_output_layers):
                layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                layers_info.append(layer)
                print("layer name: {}".format(layer.layerName))

        l_user_meta = l_user_meta.next

Please refer to deepstream_test_2.py, which has a similar pipeline. Is the pgie a detection model? Are sgie1 and sgie2 classification models?


I want to access sgie1 and sgie2!
You are correct, both are classification models, but I want to access the metadata of both and send it somewhere else. In test2 you don't show how to access that metadata. Let me give you more information.

How can I access the tensor meta for both models? That is my question; for one of them I am able to get it.

Please help me out with this!

The nvinfer plugin calls nvds_add_classifier_meta_to_object to add classification meta to the object meta. You need to get the object meta first, then get the classification meta. Please refer to pgie_src_pad_buffer_probe in the DeepStream sample deepstream-preprocess-test.
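To make the suggested approach concrete, here is a minimal sketch of walking each object's classifier_meta_list. The cast functions are passed in as parameters so the traversal logic can be shown and tested without a DeepStream install; in a real probe you would pass pyds.NvDsClassifierMeta.cast and pyds.NvDsLabelInfo.cast. The field names follow the pyds bindings; collect_classifier_labels itself is a hypothetical helper name.

```python
def collect_classifier_labels(obj_meta, cast_classifier, cast_label):
    """Walk obj_meta.classifier_meta_list and return a list of
    (unique_component_id, result_label) pairs, one per label entry.

    cast_classifier / cast_label would be pyds.NvDsClassifierMeta.cast
    and pyds.NvDsLabelInfo.cast in a real DeepStream probe.
    """
    labels = []
    l_cls = obj_meta.classifier_meta_list
    while l_cls is not None:
        cls_meta = cast_classifier(l_cls.data)
        l_label = cls_meta.label_info_list
        while l_label is not None:
            label = cast_label(l_label.data)
            labels.append((cls_meta.unique_component_id, label.result_label))
            try:
                l_label = l_label.next
            except StopIteration:
                break
        try:
            l_cls = l_cls.next
        except StopIteration:
            break
    return labels
```

With two classification sgies (gie-unique-id 2 and 3, say), you would get one entry per sgie per object, so both models' results can be collected from the same loop.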

def osd_sink_pad_buffer_probe(pad, info, u_data):

    # Initializing object counter with 0.
    obj_counter = {
        PGIE_CLASS_ID_MASK: 0,
        PGIE_CLASS_ID_FACE: 0
    }
    num_rects = 0
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return
    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    if not batch_meta:
        return Gst.PadProbeReturn.OK
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number = frame_meta.frame_num
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break

            l_user_meta = obj_meta.obj_user_meta_list
            print("l_user_meta", l_user_meta)

            while l_user_meta is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user_meta.data)
                print("user_meta", user_meta)

                if user_meta.base_meta.meta_type != pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                    # Advance before skipping; a bare `continue` here would
                    # spin forever on the same node.
                    l_user_meta = l_user_meta.next
                    continue

                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                print("unique_id of this tensor meta: {}".format(tensor_meta.unique_id))

                # Boxes in the tensor meta are in network resolution, which is
                # found in tensor_meta.network_info. Use this info to scale
                # boxes to the input frame resolution.
                if tensor_meta.unique_id == 2:
                    print("GENDER model tensor meta")
                    layers_info = []
                    for i in range(tensor_meta.num_output_layers):
                        layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                        layers_info.append(layer)
                        print("layer name: {}".format(layer.layerName))

                if tensor_meta.unique_id == 3:
                    print("AGE model tensor meta")
                    layers_info = []
                    for i in range(tensor_meta.num_output_layers):
                        layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                        layers_info.append(layer)
                        print("layer name: {}".format(layer.layerName))

                l_user_meta = l_user_meta.next

            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    # print("Frame Number =", frame_number, "Face Count =", obj_counter[PGIE_CLASS_ID_FACE])
    return Gst.PadProbeReturn.OK

This is the function I wrote to access the tensor meta for sgie1 and sgie2.

Only the sgie1 layers are printed (its unique id is 3);
for sgie2 (unique id 2) nothing is printed.

Please look through the code and tell me where I am making a mistake!

Please loop over the object meta's classifier_meta_list, which is a list.


Okay! As far as I know, classifier_meta_list works without tensor meta.

But when we set output-tensor-meta=1 in the sgie config file, we can access the raw tensor for each obj_meta.
If you go through the tensor meta documentation:

> The Gst-nvinfer plugin can attach raw output tensor data generated by a TensorRT inference engine as metadata. It is added as an NvDsInferTensorMeta in the frame_user_meta_list member of NvDsFrameMeta for primary (full frame) mode, or in the obj_user_meta_list member of NvDsObjectMeta for secondary (object) mode.

So for secondary mode, each individual tensor is reached through obj_meta.obj_user_meta_list.

Please correct me if I'm getting anything wrong.
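One common pitfall in this list walk is a `continue` that skips a non-tensor node without advancing the list: the loop then spins on the same node forever and never reaches the second sgie's tensor meta. Here is a sketch of the traversal with the advance done unconditionally; the pyds cast/getter functions are injected as parameters so the logic can be tested outside DeepStream. collect_tensor_layers is a hypothetical helper name; in a real probe you would pass pyds.NvDsUserMeta.cast, pyds.NvDsInferTensorMeta.cast, pyds.get_nvds_LayerInfo, and pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META.

```python
def collect_tensor_layers(obj_meta, cast_user, cast_tensor, get_layer, tensor_meta_type):
    """Map each attached tensor meta's unique_id to its output layer
    names, visiting every node of obj_user_meta_list exactly once."""
    layers_by_id = {}
    l_user = obj_meta.obj_user_meta_list
    while l_user is not None:
        user_meta = cast_user(l_user.data)
        if user_meta.base_meta.meta_type == tensor_meta_type:
            tensor_meta = cast_tensor(user_meta.user_meta_data)
            layers_by_id[tensor_meta.unique_id] = [
                get_layer(tensor_meta, i).layerName
                for i in range(tensor_meta.num_output_layers)
            ]
        # Advance unconditionally: skipping a node must never skip this step.
        try:
            l_user = l_user.next
        except StopIteration:
            break
    return layers_by_id
```

If both sgies attach their meta under distinct gie-unique-id values (2 and 3 in this thread), the returned dict has both keys; if one key is missing, that sgie never attached tensor meta, which points at its config (output-tensor-meta) rather than at the traversal.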

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

If output-tensor-meta is 1, you need to add the classification meta yourself; please refer to /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/deepstream_infer_tensor_meta_test.cpp.
