Dual-model image inference pipeline produces duplicate object meta content

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Orin NX
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.3

When I create and run the following pipeline, the inference results I see in a probe function attached to the src pad of nvdsmetamux are duplicated. Why does this happen? The models configured for my two PGIEs are completely different.

    /* Source and batching */
    pipeline = gst_pipeline_new("cvmat-inference-pipeline");
    appsrc = gst_element_factory_make("appsrc", "app-source");
    nvvidconv = gst_element_factory_make("nvvidconv", "nvvidconv");
    streammux = gst_element_factory_make("nvstreammux", "streammux");
    tee = gst_element_factory_make("tee", "tee");
    /* Branch 1: first PGIE + tracker */
    queue = gst_element_factory_make("queue", "queue");
    pgie = gst_element_factory_make("nvinfer", "primary-inference");
    tracker = gst_element_factory_make("nvtracker", "tracking-tracker");
    /* Branch 2: second PGIE + tracker */
    queue1 = gst_element_factory_make("queue", "queue1");
    pgie1 = gst_element_factory_make("nvinfer", "primary-inference1");
    tracker1 = gst_element_factory_make("nvtracker", "tracking-tracker1");
    /* Merge metadata from both branches, then discard output */
    metamux = gst_element_factory_make("nvdsmetamux", "metamux");
    sink = gst_element_factory_make("fakesink", "fakesink");

pipelinetest.pdf (36.4 KB)

The program error is shown in the figure:

This is normal for your pipeline. In DeepStream, a GstBuffer is simply a handle to an NvBufSurface.

tee does not copy the batch, so your YOLOv8/YOLOv5 instance-segmentation branches both operate on the same NvBufSurface (and the same batch meta).

If you need parallel inference, please refer to this sample.

Or you can describe your requirements and ask for alternative solutions.

I have tried to modify my pipeline, as shown in the attachment. Can this avoid the problem of the two models handling the same object meta during parallel inference?

pipelinetest.pdf (36.3 KB)

This pipeline is basically the same as the previous process. Please refer to the reply above.

1. Sorry, the pipeline file attached to my initial question was incorrect. In it, a tee followed the streammux and the two models then ran in parallel on the same batch.
In the modified pipeline, the tee is instead followed by two separate streammux elements, and each model then processes its own batch;

**2.** My understanding is that NvDsBatchMeta is created by the nvstreammux plugin, so with two separate streammux elements the branches are processed in parallel without affecting each other. Is that correct?

**3.** I built a multi-model inference pipeline based on this example. I don’t have multiple source inputs; I just want fast parallel inference on images, without serializing the two models, because efficiency matters. Do you have any suggestions on how I should handle this?

This pipeline works, but you shouldn’t use nvdsmetamux, which merges metadata from different source IDs. For your pipeline, I think chaining nvinfer → nvinfer can also achieve the goal.

gst-launch-1.0 nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264  ! tee name=t \
               t.src_0 ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! nvinfer batch-size=1 config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=RGBA" ! nvdsosd ! nv3dsink \
               t.src_1 ! mux1.sink_0 nvstreammux name=mux1 batch-size=1 width=1920 height=1080 ! nvinfer batch-size=1 config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary1.txt ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=RGBA" ! nvdsosd ! nv3dsink 
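The "nvinfer → nvinfer" suggestion above could instead be sketched as a single chained pipeline, an untested sketch in which the two models run serially on one batch rather than in parallel (note each nvinfer config would need a distinct unique-id; `config_infer_primary1.txt` is the same hypothetical second-model config used above):

    gst-launch-1.0 nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! mux.sink_0 \
                   nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
                   nvinfer batch-size=1 config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt ! \
                   nvinfer batch-size=1 config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary1.txt ! \
                   nvvideoconvert ! "video/x-raw(memory:NVMM),format=RGBA" ! nvdsosd ! nv3dsink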

What is your intention? Why do you need a detector and an instance-segmentation model to work together?

It is a business requirement; we want these two models to process in parallel and infer as quickly as possible. But may I ask why this pipeline cannot use nvdsmetamux? Isn’t the function of metamux to synchronize analysis results?

nvdsmetamux is only used to merge meta from multiple sources, but your pipeline has only one source.

It should be possible to achieve this by modifying the nvdsmetamux configuration file.

About parallel infer and tee, please refer to this topic.

tee does not copy; it just shares the batched buffer between branches. If you want to use parallel inference and metamux at the same time, please refer to the pipeline below.

gst-launch-1.0 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! mux.sink_0 \
               nvstreammux name=mux gpu-id=0 batch-size=1 width=1920 height=1080 ! queue ! tee name=t0 \
               t0.src_0 ! queue ! meta.sink_0 \
               t0.src_1 ! queue ! nvstreamdemux name=demux per-stream-eos=true  \
               demux.src_0 ! queue ! tee name=t \
               t.src_0 ! queue ! b0_m.sink_0 nvstreammux name=b0_m batch-size=1 width=1920 height=1080 ! queue ! nvinfer batch-size=1 config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt ! queue ! meta.sink_1 \
               t.src_1 ! queue ! b1_m.sink_1 nvstreammux name=b1_m batch-size=1 width=1920 height=1080 ! queue ! nvinfer batch-size=1 config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary1.txt ! queue ! meta.sink_2 \
               nvdsmetamux name=meta config-file=config_metamux0.txt ! queue ! nvvideoconvert ! nvdsosd ! nv3dsink
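For reference, a minimal sketch of what `config_metamux0.txt` might contain. The field names below are taken from the config shipped with NVIDIA’s deepstream_parallel_inference_app sample and may differ across DeepStream releases; treat this as an assumption to verify against your installed sample, not a definitive file:

    [property]
    enable=1
    # sink pad whose buffers are passed through to the src pad
    active-pad=sink_0
    # PTS difference (microseconds) tolerated when matching buffers
    pts-tolerance=60000

    [user-configs]

    [group-0]
    # src-ids-model-<model unique-id>=<source ids>
    # if unset, meta from all sources of that model is merged
    # src-ids-model-1=0
    # src-ids-model-2=0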

May I ask whether this approach is feasible in my case?

No; the root cause is explained in the topic linked above.

I think you can give it a try. I’ve tried it myself and it works.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.