[DeepStream 6.0][PYTHON] When using tee, osd boxes in demuxed output are shown with position and proportions of tiled output

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GTX 1650
• DeepStream Version: 6.0
• TensorRT Version: 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only): 510.47.03
• Issue Type( questions, new requirements, bugs): bug/questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
I’m trying to use two branches with different OSD elements, mimicking the deepstream_app.c pipeline in Python, but I’m facing a problem with box positions and proportions.
The Python bindings are required, since this problem happens with Python code.
I’ll post the code at the end, but these are the steps needed to reproduce it with the Python bindings (the resulting topology is sketched after the list):
1. Use a tee element to create two branches
2. On one branch, put a tiled display with OSD
3. On the other branch, put a demuxed display for a source that is also being shown in the tiled display
4. Observe that, for some frames, the boxes in the demuxed sink appear with the position and proportions they have in the tiled display instead of the correct position and proportions for the demuxed sink
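
In short, the topology built by the code posted below is (intermediate queues omitted):

uridecodebin -> nvstreammux -> nvinfer -> tee
    |-> nvmultistreamtiler -> nvvideoconvert -> nvdsosd -> nveglglessink
    |-> nvstreamdemux -> nvvideoconvert -> nvdsosd -> nveglglessink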


I had this issue with DeepStream 5.0 too and hoped it would be fixed once I updated to 6.0, but it still happens.

Sample Code:

import sys
from ctypes import *

import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst

from common.bus_call import bus_call


def cb_newpad(decodebin, decoder_src_pad, data):
    print("In cb_newpad\n")
    caps = decoder_src_pad.get_current_caps()
    gststruct = caps.get_structure(0)
    gstname = gststruct.get_name()
    source_bin = data
    features = caps.get_features(0)

    print("gstname=", gstname)
    if(gstname.find("video") != -1):
        print("features=", features)
        if features.contains("memory:NVMM"):
            bin_ghost_pad = source_bin.get_static_pad("src")
            if not bin_ghost_pad.set_target(decoder_src_pad):
                sys.stderr.write(
                    "Failed to link decoder src pad to source bin ghost pad\n")
        else:
            sys.stderr.write(
                " Error: Decodebin did not pick nvidia decoder plugin.\n")


def decodebin_child_added(child_proxy, Object, name, user_data):
    print("Decodebin child added:", name, "\n")
    if(name.find("decodebin") != -1):
        Object.connect("child-added", decodebin_child_added, user_data)


def create_source_bin(index, uri):
    print("Creating source bin")

    bin_name = "source-bin-%02d" % index
    print(bin_name)
    nbin = Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write(" Unable to create source bin \n")

    uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    if not uri_decode_bin:
        sys.stderr.write(" Unable to create uri decode bin \n")

    uri_decode_bin.set_property("uri", uri)

    uri_decode_bin.connect("pad-added", cb_newpad, nbin)
    uri_decode_bin.connect("child-added", decodebin_child_added, nbin)

    Gst.Bin.add(nbin, uri_decode_bin)
    bin_pad = nbin.add_pad(
        Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC))
    if not bin_pad:
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    return nbin


def main():
    srcs = ["file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4"]

    number_sources = len(srcs)

    GObject.threads_init()
    Gst.init(None)

    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()
    is_live = False

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")
    print("Creating streamux \n ")

    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    pipeline.add(streammux)
    for i in range(number_sources):
        print("Creating source_bin ", i, " \n ")
        uri_name = srcs[i]
        if uri_name.find("rtsp://") == 0 :
            is_live = True
        source_bin = create_source_bin(i, uri_name)
        if not source_bin:
            sys.stderr.write("Unable to create source bin \n")
        pipeline.add(source_bin)
        padname = "sink_%u" % i
        sinkpad = streammux.get_request_pad(padname)
        if not sinkpad:
            sys.stderr.write("Unable to create sink pad bin \n")
        srcpad = source_bin.get_static_pad("src")
        if not srcpad:
            sys.stderr.write("Unable to create src pad bin \n")
        srcpad.link(sinkpad)

    queue1 = Gst.ElementFactory.make("queue", "queue1")
    queue2 = Gst.ElementFactory.make("queue", "queue2")
    queue3 = Gst.ElementFactory.make("queue", "queue3")
    queue4 = Gst.ElementFactory.make("queue", "queue4")
    queue5 = Gst.ElementFactory.make("queue", "queue5")
    queue6 = Gst.ElementFactory.make("queue", "queue6")
    queue7 = Gst.ElementFactory.make("queue", "queue7")
    queue8 = Gst.ElementFactory.make("queue", "queue8")
    queue9 = Gst.ElementFactory.make("queue", "queue9")
    pipeline.add(queue1)
    pipeline.add(queue2)
    pipeline.add(queue3)
    pipeline.add(queue4)
    pipeline.add(queue5)
    pipeline.add(queue6)
    pipeline.add(queue7)
    pipeline.add(queue8)
    pipeline.add(queue9)

    print("Creating Pgie \n ")
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    print("Creating tee \n ")
    tee = Gst.ElementFactory.make("tee", "tee")
    if not tee:
        sys.stderr.write(" Unable to create tee \n")

    print("Creating tiler \n ")
    tiler = Gst.ElementFactory.make("nvmultistreamtiler", "nvtiler")
    if not tiler:
        sys.stderr.write(" Unable to create tiler \n")

    print("Creating nvvidconv for tiler \n ")
    nvvidconv_tiler = Gst.ElementFactory.make("nvvideoconvert", "convertor-tiler")
    if not nvvidconv_tiler:
        sys.stderr.write(" Unable to create nvvidconv_tiler \n")

    print("Creating nvosd \n ")
    nvosd_tiler = Gst.ElementFactory.make("nvdsosd", "onscreendisplay-tiler")
    if not nvosd_tiler:
        sys.stderr.write(" Unable to create nvosd_tiler \n")
    nvosd_tiler.set_property('process-mode', 0)
    nvosd_tiler.set_property('display-text', 1)

    print("Creating demuxer \n ")
    demuxer = Gst.ElementFactory.make("nvstreamdemux", "demuxer")
    if not demuxer:
        sys.stderr.write(" Unable to create nvstreamdemux \n")

    print("Creating nvvidconv for demuxer \n ")
    nvvidconv_demuxer = Gst.ElementFactory.make("nvvideoconvert", "convertor-demuxer")
    if not nvvidconv_demuxer:
        sys.stderr.write(" Unable to create nvvidconv_demuxer \n")

    print("Creating nvosd \n ")
    nvosd_demuxer = Gst.ElementFactory.make("nvdsosd", "onscreendisplay-demuxer")
    if not nvosd_demuxer:
        sys.stderr.write(" Unable to create nvosd_demuxer \n")
    nvosd_demuxer.set_property('process-mode', 0)
    nvosd_demuxer.set_property('display-text', 1)

    print("Creating EGLSink for tiler \n")
    sink_tiler = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer-tiler")
    if not sink_tiler:
        sys.stderr.write(" Unable to create egl sink \n")

    print("Creating EGLSink for demuxer \n")
    sink_demuxer = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer-demuxer")
    if not sink_demuxer:
        sys.stderr.write(" Unable to create egl sink \n")

    if is_live:
        print("Atleast one of the sources is live")
        streammux.set_property('live-source', 1)

    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', number_sources)
    streammux.set_property('batched-push-timeout', 4000000)

    pgie.set_property('config-file-path', "/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt")
    pgie_batch_size = pgie.get_property("batch-size")
    if(pgie_batch_size != number_sources):
        print("WARNING: Overriding infer-config batch-size",
              pgie_batch_size, " with number of sources ", number_sources, " \n")
        pgie.set_property("batch-size", number_sources)

    tiler_rows = 2
    tiler_columns = 2
    tiler.set_property("rows", tiler_rows)
    tiler.set_property("columns", tiler_columns)
    tiler.set_property("width", 1920)
    tiler.set_property("height", 1080)

    tiler.set_property("show-source", -1)

    sink_tiler.set_property("qos", 0)
    sink_demuxer.set_property("qos", 0)

    print("Adding elements to Pipeline \n")
    pipeline.add(pgie)
    pipeline.add(tee)
    pipeline.add(tiler)
    pipeline.add(nvvidconv_tiler)
    pipeline.add(nvosd_tiler)
    pipeline.add(sink_tiler)
    pipeline.add(demuxer)
    pipeline.add(nvvidconv_demuxer)
    pipeline.add(nvosd_demuxer)
    pipeline.add(sink_demuxer)

    print("Linking elements in the Pipeline \n")
    streammux.link(queue1)
    queue1.link(pgie)
    pgie.link(tee)

    sink_pad_queue2 = queue2.get_static_pad("sink")
    tee_tiler_pad = tee.get_request_pad('src_%u')
    tee_demuxed_pad = tee.get_request_pad("src_%u")
    if not tee_tiler_pad or not tee_demuxed_pad:
        sys.stderr.write("Unable to get request pads\n")
    tee_tiler_pad.link(sink_pad_queue2)

    sink_pad_queue3 = queue3.get_static_pad("sink")
    tee_demuxed_pad.link(sink_pad_queue3)

    queue2.link(tiler)
    tiler.link(queue4)
    queue4.link(nvvidconv_tiler)
    nvvidconv_tiler.link(queue5)
    queue5.link(nvosd_tiler)
    nvosd_tiler.link(queue6)
    queue6.link(sink_tiler)

    queue3.link(demuxer)
    src_pad_demuxer = demuxer.get_request_pad("src_00")
    sink_pad_queue7 = queue7.get_static_pad("sink")
    if not src_pad_demuxer or not sink_pad_queue7:
        sys.stderr.write("Unable to get request pads for demuxer\n")
    src_pad_demuxer.link(sink_pad_queue7)
    queue7.link(nvvidconv_demuxer)
    nvvidconv_demuxer.link(queue8)
    queue8.link(nvosd_demuxer)
    nvosd_demuxer.link(queue9)
    queue9.link(sink_demuxer)

    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    print("Now playing...")
    for i, source in enumerate(srcs):
        print(i, ": ", source)

    print("Starting pipeline \n")

    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass

    print("Exiting app\n")
    pipeline.set_state(Gst.State.NULL)


if __name__ == '__main__':
    sys.exit(main())


Sorry for the late response. Is this still an issue you need support with? Thanks

Hello. Thanks for getting back to me. I’m still having this problem.

Since posting, I’ve made a few more tests.
Testing with sample_qHD.h264, sample_720p.h264, sample_720p.mjpeg, sample_ride_bike.mov and sample_run.mov, the boxes were rendered with the correct positions and proportions.
However, the problem still occurs with sample_720p.mp4, sample_1080p_h265.mp4, sample_1080p_h264.mp4, sample_qHD.mp4 and my personal .mp4 and .webm files.

My guess was that the problem lay in one of the bins added by uridecodebin, but the .mov and .mp4 files logged exactly the same decodebin children, even though the .mp4 files fail and the .mov files seem to work.
I haven’t tested many .mov files, so I can’t be sure they always work, but .mp4 never worked in my tests.

I’m really confused about what could be causing this problem.


Update:
I have run new tests using sample_720p.h264 in a more complex pipeline (adding a tracker, three secondary models and an analytics element to the provided code), and the box problem happened with h264 too. It was not nearly as bad as with the mp4 files, but it happened.
Checking with nvidia-smi -l 1, VRAM usage never reaches even half of the total in the worst case, but the problem seems to start once GPU utilization exceeds 50–60%.

Is it possible that the two branches modify the same metadata structure?

Sorry for the delay.
I believe it’s possible.
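
One way to check this from the Python side (just a sketch, assuming the standard pyds metadata API used in the sample apps; the probe function name and the "tiler"/"demuxer" labels are mine) is to attach a buffer probe to each OSD's sink pad and print the box coordinates each branch actually receives:

import pyds

def osd_sink_pad_probe(pad, info, branch_name):
    # Print the boxes this branch sees so the tiled and demuxed branches
    # can be compared frame by frame.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            rect = obj_meta.rect_params
            print(branch_name, "frame", frame_meta.frame_num,
                  rect.left, rect.top, rect.width, rect.height)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attach to both OSD sink pads (element names taken from the posted code):
nvosd_tiler.get_static_pad("sink").add_probe(
    Gst.PadProbeType.BUFFER, osd_sink_pad_probe, "tiler")
nvosd_demuxer.get_static_pad("sink").add_probe(
    Gst.PadProbeType.BUFFER, osd_sink_pad_probe, "demuxer")

If both branches print the same coordinates for the same frame_num, they are indeed reading the same metadata.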

I made some tests with deepstream-app too, and the problem didn’t occur when I used the same pipeline (primary + tee + tiled and demuxed sinks). Once I added the secondary models and analytics, though, the OSD annotations would sometimes flicker.
My guess is that, since deepstream-app is faster, any interference is considerably less noticeable.

I thought the tee element would guarantee that one branch wouldn’t affect the other and that both OSDs would be independent.
Is there any workaround?

Any updates on this matter? Is there any workaround for this problem?

Adding an nvvideoconvert after the tee on each branch may be one workaround:

      |-> nvvideoconvert -> ...
tee ->
      |-> nvvideoconvert -> ...
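
For example, in the posted code this could look roughly like the following (a sketch only, untested; the conv_branch_tiler / conv_branch_demuxer element names are mine, and the two final link calls replace the existing queue2.link(tiler) and queue3.link(demuxer) lines):

conv_branch_tiler = Gst.ElementFactory.make("nvvideoconvert", "conv-branch-tiler")
conv_branch_demuxer = Gst.ElementFactory.make("nvvideoconvert", "conv-branch-demuxer")
pipeline.add(conv_branch_tiler)
pipeline.add(conv_branch_demuxer)

# Tiler branch: tee -> queue2 -> nvvideoconvert -> tiler -> ...
queue2.link(conv_branch_tiler)
conv_branch_tiler.link(tiler)

# Demuxer branch: tee -> queue3 -> nvvideoconvert -> demuxer -> ...
queue3.link(conv_branch_demuxer)
conv_branch_demuxer.link(demuxer)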

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
