Error when linking elements in Pipeline (DeepStream 5.1)

Hi everyone,
I am working on running SSD on DeepStream 5.1 with deepstream_ssd_parser.py and then adding a KLT tracker to track the detected objects. My pipeline is: sourcebin → streammux → nvinferserver → nvtracker → nvdsanalytics → nvtiler → nvvideoconvert → nvdsosd → sink. Everything ran successfully until I added the sink element to the pipeline.

My sink element is:
sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
sink.set_property("qos", 0)

My goal is to display the output video/stream on the screen. I used the container nvcr.io/nvidia/deepstream:5.1-21.02-triton. Could you help me with this issue?
Thanks in advance.

What’s the error?
What’s your platform? Please provide the setup info as other topics do.

Thanks for your comment. I ran it on my local PC and used the Docker environment I mentioned.


I couldn’t start the pipeline when I used the sink element with “nveglglessink” or “fakesink”. However, when I changed it to “filesink”, the pipeline ran successfully.
I don’t know why. Does it depend on my local PC settings?

Did you enable display access with xhost and set DISPLAY for the container?

Yes, I did, and it still doesn’t run.

Can you share all the changes you made to the sample?

There is already a sink in the sample; how do you connect your new sink? Please share detailed info.

def make_elm_or_print_err(factoryname, name, printedname, detail=""):
    """ Creates an element with Gst Element Factory make.
        Return the element  if successfully created, otherwise print
        to stderr and return None.
    """
    print("Creating", printedname)
    elm = Gst.ElementFactory.make(factoryname, name)
    if not elm:
        sys.stderr.write("Unable to create " + printedname + " \n")
        if detail:
            sys.stderr.write(detail)
    return elm
def main(args):
    # Check input arguments
    if len(args) < 2:
        sys.stderr.write("usage: %s <uri1> [uri2] ... [uriN]\n" % args[0])
        sys.exit(1)

    for i in range(0, len(args) - 1):
        fps_streams["stream{0}".format(i)] = GETFPS(i)
    number_sources = len(args) - 1

    # Standard GStreamer initialization
    GObject.threads_init()
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()
    is_live = False

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")
    print("Creating streamux \n ")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    pipeline.add(streammux)
    for i in range(number_sources):
        print("Creating source_bin ", i, " \n ")
        uri_name = args[i + 1]
        if uri_name.find("rtsp://") == 0:
            is_live = True
        source_bin = create_source_bin(i, uri_name)
        if not source_bin:
            sys.stderr.write("Unable to create source bin \n")
        pipeline.add(source_bin)
        padname = "sink_%u" % i
        sinkpad = streammux.get_request_pad(padname)
        if not sinkpad:
            sys.stderr.write("Unable to create sink pad bin \n")
        srcpad = source_bin.get_static_pad("src")
        if not srcpad:
            sys.stderr.write("Unable to create src pad bin \n")
        srcpad.link(sinkpad)

    # Use nvinferserver to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    pgie = make_elm_or_print_err("nvinferserver", "primary-inference", "Nvinferserver")

    # Use convertor to convert from NV12 to RGBA as required by nvosd
    nvvidconv = make_elm_or_print_err("nvvideoconvert", "convertor", "Nvvidconv")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = make_elm_or_print_err("nvdsosd", "onscreendisplay", "OSD (nvosd)")
    nvosd.set_property("process-mode", 1)

    # sink = make_elm_or_print_err("filesink", "filesink", "Sink")
    sink = make_elm_or_print_err("nveglglessink", "nvvideo-renderer", "Sink")
    # sink = make_elm_or_print_err("fakesink", "fakesink", "Sink")
    # sink.set_property("location", OUTPUT_VIDEO_NAME)
    # sink.set_property("sync", 0)
    # sink.set_property("async", 0)
    sink.set_property("qos", 0)
    print("Playing file %s " % args[1])
    # source.set_property("location", args[1])
    if is_live:
        print("Atleast one of the sources is live")
        streammux.set_property('live-source', 1)

    streammux.set_property("width", IMAGE_WIDTH)
    streammux.set_property("height", IMAGE_HEIGHT)
    streammux.set_property("batch-size", 1)
    streammux.set_property("batched-push-timeout", 4000000)
    pgie.set_property("config-file-path", "dstest_ssd_nopostprocess.txt")

    print("Adding elements to Pipeline \n")
    pipeline.add(pgie)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)

    print("Linking elements in the Pipeline \n")

    # srcpad.link(sinkpad)
    # sinkpad.link(pgie)
    streammux.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)
    nvosd.link(sink)

    # create an event loop and feed gstreamer bus messages to it
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Add a probe on the primary-infer source pad to get inference output tensors
    pgiesrcpad = pgie.get_static_pad("src")
    if not pgiesrcpad:
        sys.stderr.write(" Unable to get src pad of primary infer \n")

    pgiesrcpad.add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0)

    # Lets add probe to get informed of the meta data generated, we add probe to
    # the sink pad of the osd element, since by that time, the buffer would have
    # had got all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    # print("abc_sink: ", osdsinkpad)
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")

    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    # start play back and listen to events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)
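
(As in the DeepStream Python samples, the script is launched through an entry point along these lines:)

if __name__ == "__main__":
    # Invoked as, e.g.:  python3 test_rtsp_ssd.py rtsp://<camera-uri>
    # (placeholder URI; one or more URIs are accepted, as the usage
    # message in main() shows)
    sys.exit(main(sys.argv))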

I referred to deepstream_nvanalytics.py to add the tracker to SSD. Could you give me some advice about that?

Before we give advice, we need to check which change caused the issue. Can you please do us a favor and just share the diff you made against the deepstream-ssd-parser sample? That will save us time finding the related change, reproducing the issue, and identifying the root cause so we can advise you. Could you?

Okay. My goal is to add a tracker to deepstream-ssd-parser, referring to deepstream-nvanalytics. Additionally, I want to change the input to an RTSP stream and then display the output on the screen.

For deepstream-ssd-parser, I removed the source, h264parser, and decoder elements and created a source bin, referring to deepstream-nvanalytics.

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    pipeline.add(streammux)
    for i in range(number_sources):
        print("Creating source_bin ", i, " \n ")
        uri_name = args[i + 1]
        if uri_name.find("rtsp://") == 0:
            is_live = True
        source_bin = create_source_bin(i, uri_name)
        if not source_bin:
            sys.stderr.write("Unable to create source bin \n")
        pipeline.add(source_bin)
        padname = "sink_%u" % i
        sinkpad = streammux.get_request_pad(padname)
        if not sinkpad:
            sys.stderr.write("Unable to create sink pad bin \n")
        srcpad = source_bin.get_static_pad("src")
        if not srcpad:
            sys.stderr.write("Unable to create src pad bin \n")
        srcpad.link(sinkpad)
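
The create_source_bin helper is the uridecodebin-based one from the samples; a condensed sketch of it (the full sample also hooks the "child-added" signal):

def create_source_bin(index, uri):
    """Wrap a uridecodebin in a bin that exposes a single "src" ghost pad,
    following the pattern from the DeepStream Python samples."""
    nbin = Gst.Bin.new("source-bin-%02d" % index)
    uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    if not nbin or not uri_decode_bin:
        sys.stderr.write(" Unable to create source bin \n")
        return None
    uri_decode_bin.set_property("uri", uri)

    def cb_newpad(decodebin, decoder_src_pad, data):
        # Forward only decoded video pads backed by NVMM memory, so that
        # streammux receives hardware (GPU) buffers.
        caps = decoder_src_pad.get_current_caps()
        gststruct = caps.get_structure(0)
        if gststruct.get_name().find("video") != -1:
            if caps.get_features(0).contains("memory:NVMM"):
                ghost_pad = data.get_static_pad("src")
                if not ghost_pad.set_target(decoder_src_pad):
                    sys.stderr.write("Failed to link decoder src pad to ghost pad\n")
            else:
                sys.stderr.write("Decodebin did not pick the nvidia decoder plugin\n")

    uri_decode_bin.connect("pad-added", cb_newpad, nbin)
    Gst.Bin.add(nbin, uri_decode_bin)

    # Ghost pad with no target yet; cb_newpad points it at the decoded
    # video pad once uridecodebin has created it.
    if not nbin.add_pad(Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC)):
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    return nbin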

Next, I kept the pgie from deepstream-ssd-parser.

pgie = make_elm_or_print_err("nvinferserver", "primary-inference", "Nvinferserver")
pgie.set_property("config-file-path", "dstest_ssd_nopostprocess.txt")

I used the tracker of deepstream-nvanalytics:

    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    # Set properties of tracker
    config = configparser.ConfigParser()
    config.read('dsnvanalytics_tracker_config.txt')
    config.sections()

    for key in config['tracker']:
        if key == 'tracker-width':
            tracker_width = config.getint('tracker', key)
            tracker.set_property('tracker-width', tracker_width)
        if key == 'tracker-height':
            tracker_height = config.getint('tracker', key)
            tracker.set_property('tracker-height', tracker_height)
        if key == 'gpu-id':
            tracker_gpu_id = config.getint('tracker', key)
            tracker.set_property('gpu_id', tracker_gpu_id)
        if key == 'll-lib-file':
            tracker_ll_lib_file = config.get('tracker', key)
            tracker.set_property('ll-lib-file', tracker_ll_lib_file)
        if key == 'll-config-file':
            tracker_ll_config_file = config.get('tracker', key)
            tracker.set_property('ll-config-file', tracker_ll_config_file)
        if key == 'enable-batch-process':
            tracker_enable_batch_process = config.getint('tracker', key)
            tracker.set_property('enable_batch_process', tracker_enable_batch_process)
        if key == 'enable-past-frame':
            tracker_enable_past_frame = config.getint('tracker', key)
            tracker.set_property('enable_past_frame', tracker_enable_past_frame)

Then, I added the nvanalytics and nvosd elements from deepstream-nvanalytics. Finally, I created the sink element and linked it into the pipeline:

sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
sink.set_property("qos",0)

Eventually, my pipeline is: streammux → pgie → tracker → nvanalytics → nvosd → sink. Every time I add a new element, I run the pipeline, and everything works successfully until I add the sink element. Further, when I change the sink to “filesink”, the pipeline works.
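
In code, the nvanalytics setup and the add/link section look roughly like this (a condensed sketch; config_nvdsanalytics.txt is the config file name from the deepstream-nvanalytics sample):

    # nvdsanalytics element, configured as in deepstream-nvanalytics
    nvanalytics = Gst.ElementFactory.make("nvdsanalytics", "analytics")
    nvanalytics.set_property("config-file", "config_nvdsanalytics.txt")

    print("Adding elements to Pipeline \n")
    for elem in (pgie, tracker, nvanalytics, nvosd, sink):
        pipeline.add(elem)

    print("Linking elements in the Pipeline \n")
    # streammux -> pgie -> tracker -> nvanalytics -> nvosd -> sink
    # (the nvtiler / nvvideoconvert from the fuller pipeline would be
    # linked between nvanalytics and nvosd)
    streammux.link(pgie)
    pgie.link(tracker)
    tracker.link(nvanalytics)
    nvanalytics.link(nvosd)
    nvosd.link(sink)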

Is everything clear? Please let me know if anything is confusing. Thank you very much.

Could you just share your modified source files with us?

Sure.
test_rtsp_ssd.py (18.6 KB)

It seems it will still take some effort to figure out how to run the attached script…

Please run
# export GST_DEBUG="*:2"
then run your DS command and share the failure log.
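
If exporting the variable inside the container is awkward, the same default threshold can also be set from the Python script itself (roughly equivalent to GST_DEBUG="*:2"):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# Raise every category to WARNING (level 2), which is what
# GST_DEBUG="*:2" does for the default threshold.
Gst.debug_set_default_threshold(Gst.DebugLevel.WARNING)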

Thank you, here is the failure log.

As the log shows, the failure is “Could not init EGL display connection”, so you can’t use nveglglessink if you don’t have a display device connected.
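
One simple guard in the script is to create nveglglessink only when a DISPLAY is available and otherwise fall back to the filesink path that already works for you. A sketch (it only catches a missing DISPLAY, not the case where DISPLAY is set but the container cannot reach the host X server):

import os
import sys

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

if os.environ.get("DISPLAY"):
    # EGL/X11 renderer; only works when an X display is reachable
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    if sink:
        sink.set_property("qos", 0)
else:
    # Fallback: write buffers to a file instead of rendering on screen.
    # "out.raw" is a placeholder path; without an encoder/muxer in front
    # of the filesink it receives raw (unencoded) video buffers.
    sink = Gst.ElementFactory.make("filesink", "filesink")
    if sink:
        sink.set_property("location", "out.raw")
        sink.set_property("sync", 0)

if not sink:
    sys.stderr.write("Unable to create sink element\n")
    sys.exit(1)

If DISPLAY is set inside the container but EGL still cannot connect, the usual cause is that the container does not have access to the host X server; that is normally fixed by allowing access with xhost on the host and starting the container with DISPLAY and the X11 socket (/tmp/.X11-unix) passed through.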

Yeah, but I already have a screen connected. How do I fix it?

How about the results of xrandr within the Docker container?


Here is the result. When I do

echo $DISPLAY

The output is :0, which is correct.

You are using the system display environment variables within Docker, so it is not surprising that you get that value. Please check the xrandr results directly on your host, make sure you can get the display info there, and try again.

I checked xrandr on the host and it looks fine, doesn’t it?


I also ran the script again and it still doesn’t work.

You still cannot get the display info within Docker?