Cannot get secondary model to work on inference results of detector

• Hardware Platform Jetson
• DeepStream Version 6.3
• JetPack Version 5.1
• TensorRT Version 5.1

Hi, I’m trying to write an application that uses two models back-to-back (one of them is Yolo). I need my secondary model to run inference on the Yolo detections, and after the secondary model finishes, I need to perform custom post-processing. I have been following the deepstream_test3 and deepstream_ssd_parser examples.

I add a probe to the src pad of the secondary nvinfer element, identical to the one in the SSD example, from which I call my own post-processing function:

    # Add a probe on the secondary-infer source pad to get inference output tensors
    sgie1srcpad = sgie1.get_static_pad("src")
    if not sgie1srcpad:
        sys.stderr.write(" Unable to get src pad of secondary infer \n")

    sgie1srcpad.add_probe(Gst.PadProbeType.BUFFER, sgie_src_pad_buffer_probe, 0)

def sgie_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return Gst.PadProbeReturn.OK

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list

    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        l_user = frame_meta.frame_user_meta_list
        print('pre second while')
        while l_user is not None:
            print('second while!')
            try:
                # Note that l_user.data needs a cast to pyds.NvDsUserMeta
                # The casting also keeps ownership of the underlying memory
                # in the C code, so the Python garbage collector will leave
                # it alone.
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            except StopIteration:
                break


            if (
                    user_meta.base_meta.meta_type
                    != pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META
            ):
                # advance l_user before skipping, otherwise this loop never ends
                try:
                    l_user = l_user.next
                except StopIteration:
                    break
                continue

            tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)

            layers_info = []

            for i in range(tensor_meta.num_output_layers):
                layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                layers_info.append(layer)

            frame_object_list = nvds_infer_parse_pose_tensors(
                layers_info
            )
            try:
                l_user = l_user.next
            except StopIteration:
                break

            #for frame_object in frame_object_list:
            #    add_obj_meta_to_frame(frame_object, batch_meta, frame_meta, label_names)

        try:
            # indicate inference is performed on the frame
            frame_meta.bInferDone = True
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK

def nvds_infer_parse_pose_tensors(output_layer_info):
    print(output_layer_info)
    dimensions = layer_finder(output_layer_info, "dimension")
    print(pyds.get_detections(dimensions.buffer, 0))
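For context, the `layer_finder` helper used above is taken from the deepstream_ssd_parser example; a minimal version (a sketch, not necessarily byte-for-byte the shipped one) just searches the collected layers by name:

```python
def layer_finder(output_layer_info, name):
    """Return the layer in output_layer_info whose layerName matches name,
    or None if no such layer exists."""
    for layer in output_layer_info:
        if layer.layerName == name:
            return layer
    return None
```

With the pose config above, `layer_finder(layers_info, "dimension")` would return the layer whose `output-blob-names` entry is `dimension`, or None if the model did not emit it.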

The problem is that when I test on a video stream, there is no output from the second model (no NVDSINFER_TENSOR_OUTPUT_META in the user_meta_data).

This works when I set process-mode to 1 for the secondary network, but I need to do inference on the Yolo crops, not on the entire frame.
What could be the problem?

Yolo config file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#0=RGB, 1=BGR
model-color-format=0
model-engine-file=/wd_ssd/deepstream_python_apps/apps/deepstream-test3/yolov4_trt.engine
labelfile-path=/wd_ssd/mcs/dataset/propetro.names
batch-size=1

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
num-detected-classes=8
gie-unique-id=1
network-type=0
is-classifier=0

# 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)

cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV4
custom-lib-path=/wd_ssd/builds/yolo_deepstream/deepstream_yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
nms-iou-threshold=0.6
pre-cluster-threshold=0.4

Pose config file:

[property]
gpu-id=0
net-scale-factor=1
onnx-file=…/weights/pose_net_tensorrt/pose_net_nchw.onnx
model-engine-file=…/weights/pose_net_tensorrt/pose_net_nchw.onnx_b16_gpu0_fp16.engine
batch-size=1

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
input-object-min-width=224
input-object-min-height=224

# process-mode: 2 - inference on crops from primary detector, 1 - inference on whole frame

process-mode=2

# 0=RGB, 1=BGR, 2=GRAY

model-color-format=1
gie-unique-id=2
operate-on-gie-id=1
output-blob-names=dimension;x_alpha;x_confidence;y_alpha;y_confidence;z_alpha;z_confidence

# 0=Detector, 1=Classifier, 2=Segmentation, 3=Instance Segmentation, 100=Other
# (see /opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream-infer-tensor-meta-test)

network-type=100

# Enable tensor metadata output

output-tensor-meta=true

Could you refer to our demo deepstream-pose-classification and compare your configuration files against it?

Hi,

I have checked the bodypose config, but I don’t see anything different from my config.

It seems to me NVDSINFER_TENSOR_OUTPUT_META isn’t attached to the frame_meta when I set process-mode=2

I have also tried editing the original deepstream_test3.py example to add a fake probe to the car color classifier, just to output some text.
My code is almost identical to the SSD example, without any post-processing:

def sgie2_src_fake_pad_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return Gst.PadProbeReturn.OK

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list

    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        l_user = frame_meta.frame_user_meta_list
        print('pre second while')
        while l_user is not None:
            print('second while!')
            try:
                # Note that l_user.data needs a cast to pyds.NvDsUserMeta
                # The casting also keeps ownership of the underlying memory
                # in the C code, so the Python garbage collector will leave
                # it alone.
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            except StopIteration:
                break


            if (
                    user_meta.base_meta.meta_type
                    != pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META
            ):
                # advance l_user before skipping, otherwise this loop never ends
                try:
                    l_user = l_user.next
                except StopIteration:
                    break
                continue
            print('after if')
            tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)

            layers_info = []

            for i in range(tensor_meta.num_output_layers):
                layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                layers_info.append(layer)
            print('got layers info list')
            frame_object_list = ["test","dict"]
            try:
                l_user = l_user.next
            except StopIteration:
                break

            for frame_object in frame_object_list:
                add_obj_meta_to_frame(frame_object, batch_meta, frame_meta)

        try:
            # indicate inference is performed on the frame
            frame_meta.bInferDone = True
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK

The thing is, the program never executes the second while loop, yet it still displays the car color, so the classifier itself is working correctly.

What is the correct way to add a probe for post-processing of the secondary model?

Upon reading documentation, I stumbled upon this

When operating as primary GIE, NvDsInferTensorMeta is attached to each frame’s (each NvDsFrameMeta object’s) frame_user_meta_list. When operating as secondary GIE, NvDsInferTensorMeta is attached to each NvDsObjectMeta object’s obj_user_meta_list.

But when I try to read from obj_user_meta_list, I get a segmentation fault in my program:

    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        l_obj = frame_meta.obj_meta_list
        user_meta = pyds.NvDsObjectMeta.cast(l_obj)
        if user_meta:
            print('object meta:', user_meta)
            while user_meta is not None:
                print('obj user meta list', user_meta.obj_user_meta_list)
                print('segmentation fault here?')
                print(pyds.NvDsUserMeta.cast(user_meta.obj_user_meta_list))

                try:
                    user_meta = pyds.NvDsUserMeta.cast(user_meta.obj_user_meta_list)
                except StopIteration:
                    break

                if (
                    user_meta.base_meta.meta_type
                    != pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META
                ):
                    continue

                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                print('tensor meta!')
                layers_info = []

What is the correct way to read data from obj_user_meta_list?

You can refer to our demo code: deepstream_nvdsanalytics.py.

After reading this post I solved the issue like this:

        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break

            obj_usr_meta_list = obj_meta.obj_user_meta_list

            while obj_usr_meta_list is not None:

                try:
                    user_meta_list = pyds.NvDsUserMeta.cast(obj_usr_meta_list.data)
                except StopIteration:
                    break

                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta_list.user_meta_data)

                layers_info = [] 

                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    layers_info.append(layer)

                nvds_infer_parse_tensors(layers_info)

                try:
                    obj_usr_meta_list = obj_usr_meta_list.next
                except StopIteration:
                    break
            try:
                l_obj=l_obj.next
            except StopIteration:
                break
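Once layers_info is collected, each layer’s raw output buffer still has to be read out. One common pattern is to take the buffer address and element count and view the memory through ctypes. This is a sketch, not code from the thread: with pyds, the address would typically come from pyds.get_ptr(layer.buffer) and the count from layer.inferDims.numElements, and it assumes FP32 output layers:

```python
import ctypes

def float_buffer_to_list(addr, num_elements):
    """Read num_elements 32-bit floats starting at the raw address addr.

    With pyds, addr would typically be pyds.get_ptr(layer.buffer) and
    num_elements would be layer.inferDims.numElements (assumptions here,
    since neither call is shown in the thread).
    """
    ptr = ctypes.cast(addr, ctypes.POINTER(ctypes.c_float))
    return [ptr[i] for i in range(num_elements)]

# Example with a plain ctypes array standing in for a tensor buffer:
buf = (ctypes.c_float * 3)(1.0, 2.0, 3.0)
print(float_buffer_to_list(ctypes.addressof(buf), 3))  # [1.0, 2.0, 3.0]
```

From there the values can be reshaped or fed into whatever custom post-processing nvds_infer_parse_tensors needs to do.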
