There is a confusing bug when using multiple "nvinfer" elements in parallel via "tee"

**• NVIDIA Jetson Orin NX Engineering Reference Developer Kit**
**• DeepStream 7.0**
**• JetPack 6.0 [L4T 36.3.0]**
**• TensorRT 8.6.2.3**

My source is a CSI camera. I run real-time inference on the image frames, using "tee + queue" for multi-way branching. In each branch I run an inference model ("nvinfer", pgie), draw on the frame with "nvdsosd"/"nvsegvisual", and push the result to an MJPEG stream so the browser can switch between the displays.
Below is my pipeline diagram:

Pipeline linking code:
```python

print("Linking elements in Pipeline")
src.link(conv)
conv.link(caps)
caps.link(tee)
tee.link(queue_full)
queue_full.link(encoder_full)
encoder_full.link(sink_full)
sink_full.connect('new-sample', on_buffer, sink_full)
#od
tee.link(queue_yolo_od)
sinkpad_yolo_od = streammux_yolo_od.get_request_pad('sink_0')
if not sinkpad_yolo_od:
    print('ERROR: Unable to get the sink pad of streammux_yolo_od')
    sys.exit(1)
srcpad_yolo_od = queue_yolo_od.get_static_pad('src')
if not srcpad_yolo_od:
    print('ERROR: Unable to get the src pad of queue_yolo_od')
    sys.exit(1)
if srcpad_yolo_od.link(sinkpad_yolo_od) != Gst.PadLinkReturn.OK:
    print('ERROR: Could not link queue_yolo_od to streammux_yolo_od sink_0')
    sys.exit(1)
if not streammux_yolo_od.link(nvinfer_yolo_od):
    print('ERROR: Could not link streammux_yolo_od to nvinfer_yolo_od')
    sys.exit(1)
nvinfer_yolo_od.link(nvdsosd_yolo_od)
nvdsosd_yolo_od.link(encoder_yolo_od)
encoder_yolo_od.link(sink_yolo_od)
sink_yolo_od.connect('new-sample', on_buffer, sink_yolo_od)

#pose
tee.link(queue_yolo_pose)
sinkpad_yolo_pose = streammux_yolo_pose.get_request_pad('sink_0')
if not sinkpad_yolo_pose:
    print('ERROR: Unable to get the sink pad of streammux_yolo_pose')
    sys.exit(1)
srcpad_yolo_pose = queue_yolo_pose.get_static_pad('src')
if srcpad_yolo_pose.link(sinkpad_yolo_pose) != Gst.PadLinkReturn.OK:
    print('ERROR: Could not link queue_yolo_pose to streammux_yolo_pose sink_0')
    sys.exit(1)
streammux_yolo_pose.link(nvinfer_yolo_pose)
nvinfer_yolo_pose.link(nvdsosd_yolo_pose)
nvdsosd_yolo_pose.link(encoder_yolo_pose)
encoder_yolo_pose.link(sink_yolo_pose)
infer_src_pad_pose = nvinfer_yolo_pose.get_static_pad("src")
if not infer_src_pad_pose:
    sys.stderr.write("Unable to get src pad of nvinfer_yolo_pose\n")
    sys.exit(1)
else:
    infer_src_pad_pose.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 1)
sink_yolo_pose.connect('new-sample', on_buffer, sink_yolo_pose)

#face
tee.link(queue_yolo_face)
sinkpad_yolo_face = streammux_yolo_face.get_request_pad('sink_0')
srcpad = queue_yolo_face.get_static_pad('src')
if srcpad.link(sinkpad_yolo_face) != Gst.PadLinkReturn.OK:
    print('ERROR: Could not link queue_yolo_face to streammux_yolo_face sink_0')
    sys.exit(1)
streammux_yolo_face.link(nvinfer_yolo_face)
nvinfer_yolo_face.link(nvdsosd_yolo_face)
nvdsosd_yolo_face.link(encoder_yolo_face)
encoder_yolo_face.link(sink_yolo_face)
sink_yolo_face.connect('new-sample', on_buffer, sink_yolo_face)

#seg
tee.link(queue_yolo_seg)
sinkpad_yolo_seg = streammux_yolo_seg.get_request_pad('sink_0')
srcpad_yolo_seg = queue_yolo_seg.get_static_pad('src')
if srcpad_yolo_seg.link(sinkpad_yolo_seg) != Gst.PadLinkReturn.OK:
    print('ERROR: Could not link queue_yolo_seg to streammux_yolo_seg sink_0')
    sys.exit(1)
streammux_yolo_seg.link(nvinfer_yolo_seg)
nvinfer_yolo_seg.link(nvdsosd_yolo_seg)
nvdsosd_yolo_seg.link(encoder_yolo_seg)
encoder_yolo_seg.link(sink_yolo_seg)
sink_yolo_seg.connect('new-sample', on_buffer, sink_yolo_seg)

#unet
tee.link(queue_unet)
sinkpad_unet = streammux_unet.request_pad_simple("sink_0")
if not sinkpad_unet:
    sys.stderr.write("Unable to get the sink pad of streammux_unet\n")
    sys.exit(1)
srcpad_unet = queue_unet.get_static_pad("src")
if not srcpad_unet:
    sys.stderr.write("Unable to get the src pad of queue_unet\n")
    sys.exit(1)
if srcpad_unet.link(sinkpad_unet) != Gst.PadLinkReturn.OK:
    sys.stderr.write("Could not link queue_unet to streammux_unet sink_0\n")
    sys.exit(1)
streammux_unet.link(nvvidconv_unet)
nvvidconv_unet.link(seg_unet)
if not seg_unet.link(nvsegvisual_unet):
    sys.stderr.write("Failed to seg.link(nvsegvisual) \n")
if not nvsegvisual_unet.link(nvvidconv_post_visual_unet):
    sys.stderr.write("Failed to link nvsegvisual to nvvidconv_post_visual \n")
if not nvvidconv_post_visual_unet.link(scale_caps_unet):
    sys.stderr.write("Failed to link nvvidconv_post_visual to video_scale \n")
if not scale_caps_unet.link(encoder_unet):
    sys.stderr.write("Failed to link scale_caps_unet to encoder_unet \n")    
if not encoder_unet.link(sink_unet):
    sys.stderr.write("Failed to link encoder_unet to sink \n")

# Let's add a probe to get informed of the metadata generated; we add the
# probe to the src pad of the inference element
seg_src_pad_unet = seg_unet.get_static_pad("src")
if not seg_src_pad_unet:
    sys.stderr.write(" Unable to get src pad \n")
else:
    seg_src_pad_unet.add_probe(Gst.PadProbeType.BUFFER,seg_src_pad_buffer_probe, 0)
sink_unet.connect('new-sample', on_buffer, sink_unet)

```

The problem I encountered: the outputs are mixed together. For example, in the yolo-pose output stream I also find the yolo-od/yolo-seg/yolo-face results drawn on the frame. Likewise, the yolo-seg stream, which should only show the segmentation drawing, also contains the yolo-pose/yolo-od/yolo-face drawings, and so on.

1. The following figure shows the MJPEG stream of the YOLO object detection, but the instance-segmentation results appear in the picture.

2. The following figure shows the MJPEG stream of yolo-pose, but the YOLO instance-segmentation results appear in the picture.

I tried another approach: I wrapped the camera capture as an "appsrc", duplicated the captured frames into multiple appsrc elements, and created multiple pipelines at the same time, feeding each pipeline's inference model from its own appsrc copy. This way one camera resource drives several pipelines simultaneously (each with its own inference model and a sink producing an MJPEG stream). However, it did not work: this architecture shows the same mixed-up results as the "tee" architecture.

Why does this confusing bug occur and how can I fix it?
I need help, looking forward to your reply.

Can you upload a clearer image of the pipeline?

Why is it not clear when uploaded?

Could you try to compress it before uploading?

pipeline.pdf (787.6 KB)

I uploaded a PDF, please take a look.

Could you try adding an nvvideoconvert after each nvstreammux and setting disable-passthrough=1 on it?
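A minimal sketch of where the extra element goes, written as a hypothetical two-branch `gst-launch-1.0` pipeline (the resolutions, config file names, and fakesink endpoints are illustrative placeholders, not the original configuration):

```shell
# One nvvideoconvert per branch, right after that branch's nvstreammux,
# with disable-passthrough=1 so the branch works on its own copy of the frame.
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! tee name=t \
  t. ! queue ! mux_od.sink_0 nvstreammux name=mux_od batch-size=1 width=1280 height=720 ! \
       nvvideoconvert disable-passthrough=1 ! nvinfer config-file-path=od_config.txt ! nvdsosd ! fakesink \
  t. ! queue ! mux_pose.sink_0 nvstreammux name=mux_pose batch-size=1 width=1280 height=720 ! \
       nvvideoconvert disable-passthrough=1 ! nvinfer config-file-path=pose_config.txt ! nvdsosd ! fakesink
```

In the Python pipeline above, the equivalent change is to create one `nvvideoconvert` per branch, set its `disable-passthrough` property to 1, and link it between each `streammux_*` and its `nvinfer_*` element.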


OMG! You solved this messed-up bug. I was stuck on it for days; thank you, what a great engineer you are. But I still don't understand what caused the bug. Could you tell me what caused it and why this change fixes it?

The tee plugin does not deep-copy the raw data; all branches share the same GstBuffer, so every branch's nvdsosd/nvsegvisual draws into the same frame memory. If you want each branch to process the frame separately, use nvvideoconvert (with disable-passthrough=1) to force a deep copy per branch.
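The mechanism can be illustrated in plain Python, independent of GStreamer (this is an analogy, not DeepStream code): tee hands every branch a reference to the same buffer, so one branch's drawing is visible in all the others, while a per-branch copy keeps them isolated.

```python
import copy

# A stand-in for a video frame: each branch "draws" by appending an overlay.
frame = {"overlays": []}

# tee-like fan-out: every branch receives a reference to the SAME frame.
branches = [frame, frame, frame]
branches[0]["overlays"].append("yolo-od boxes")
branches[1]["overlays"].append("yolo-pose skeleton")

# The third branch now sees the other branches' drawings (the reported bug).
print(branches[2]["overlays"])  # ['yolo-od boxes', 'yolo-pose skeleton']

# Deep-copy per branch (what nvvideoconvert with disable-passthrough=1
# achieves for the NVMM frame memory): each branch draws on its own frame.
isolated = [copy.deepcopy({"overlays": []}) for _ in range(3)]
isolated[0]["overlays"].append("yolo-od boxes")
print(isolated[1]["overlays"])  # []
```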
