How to ensure that the metadata from two PGIE engines is detected in the sink?

Please provide complete information as applicable to your setup.

• DeepStream Version: 7.0
• NVIDIA GPU Driver Version: 550.67
• Issue Type: Question

Question

I’m trying to introduce a second PGIE engine into my Python-based pipeline. It has been partially working: I can run either PGIE engine on its own, and the correct `unique_component_id` is picked up depending on which one is running. However, when I run both PGIEs at the same time, only the first object is picked up, and no matter what I change, `l_obj.next` is always `None`.

Is there a configuration or set-up I could have missed? I’ve provided two excerpts and I’m happy to provide more if needed.
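For context, my understanding is that `unique_component_id` on each object meta comes from the `gie-unique-id` property in the corresponding nvinfer configuration, so I have given the two PGIEs distinct IDs. An abridged sketch of the two configs (file names are placeholders; all other properties omitted):

```ini
# pgie_1_config.txt (placeholder name)
[property]
gie-unique-id=1
# shows up as obj_meta.unique_component_id == 1
process-mode=1
# 1 = primary (full-frame) inference

# pgie_2_config.txt (placeholder name)
[property]
gie-unique-id=2
# shows up as obj_meta.unique_component_id == 2
process-mode=1
```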

Pipeline Excerpt

    l_obj = frame_meta.obj_meta_list
    while l_obj is not None:
        try:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
        except StopIteration:
            break  # cast failed; stop iterating rather than spin on the same node

        if obj_meta.unique_component_id == 1:
            print("Object 1 Detected\n")
            handle_pgie_1_metadata(obj_meta, batch_meta, frame_number, datetime_right_now, stream_id, frame_meta)
        # Handle metadata from the second inference engine
        elif obj_meta.unique_component_id == 2:
            print("Object 2 Detected\n")
            handle_pgie_2_metadata(obj_meta, batch_meta, frame_number, datetime_right_now, stream_id, frame_meta)

        try:
            l_obj = l_obj.next
        except StopIteration:
            break
    try:
        l_frame = l_frame.next
    except StopIteration:
        break

return Gst.PadProbeReturn.OK

Probe Excerpt

    l_obj = frame_meta.obj_meta_list
    while l_obj is not None:
        try:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
        except StopIteration:
            break  # cast failed; stop iterating rather than spin on the same node

        draw_bounding_box(obj_meta)

        if obj_meta.unique_component_id == 1:
            print("Object 1 Detected\n")
            handle_pgie_1_metadata(obj_meta, batch_meta, frame_number, datetime_right_now, stream_id, frame_meta)
        # Handle metadata from the second inference engine
        elif obj_meta.unique_component_id == 2:
            print("Object 2 Detected\n")
            handle_pgie_2_metadata(obj_meta, batch_meta, frame_number, datetime_right_now, stream_id, frame_meta)

        try:
            l_obj = l_obj.next
        except StopIteration:
            break
    try:
        l_frame = l_frame.next
    except StopIteration:
        break

return Gst.PadProbeReturn.OK
  1. Could you share the whole pipeline? Which plugin did you add the probe function to?
  2. If you test only PGIE 1, are there any detected objects? And if you test only PGIE 2, are there any detected objects?
  3. What are the two models used for? Is there any dependency between them? For example, the first model detects people and the second detects faces, so the second model depends on the outputs of the first. If it still doesn’t work, could you share the two nvinfer configuration files?
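To help isolate the problem, the dispatch loop itself can be sanity-checked without DeepStream by walking stub objects that mimic the `data`/`next` linked-list shape pyds exposes (a pyds-free sketch; the classes below are invented for illustration):

```python
# Stub stand-ins for pyds metadata, invented for illustration only.
class ObjMeta:
    def __init__(self, unique_component_id):
        self.unique_component_id = unique_component_id

class Node:
    """One element of an obj_meta_list-style linked list."""
    def __init__(self, data, nxt=None):
        self.data = data
        self.next = nxt

def dispatch(obj_meta_list):
    """Return the component IDs seen, in traversal order."""
    seen = []
    l_obj = obj_meta_list
    while l_obj is not None:
        obj_meta = l_obj.data  # pyds would need NvDsObjectMeta.cast() here
        if obj_meta.unique_component_id == 1:
            seen.append(1)     # handle_pgie_1_metadata(...) in the real probe
        elif obj_meta.unique_component_id == 2:
            seen.append(2)     # handle_pgie_2_metadata(...) in the real probe
        l_obj = l_obj.next     # advance; in pyds this can raise StopIteration
    return seen

# Two objects from different PGIEs attached to one frame:
objs = Node(ObjMeta(1), Node(ObjMeta(2)))
print(dispatch(objs))  # -> [1, 2]
```

If this logic behaves as expected here, the issue is more likely that the second PGIE never attaches its objects to the frame meta in the first place, which is why the configuration files matter.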