Unable to recover video/channels using the new nvstreammux plus Triton Inference Server

Hi Team,
This is my configuration; I am running with USE_NEW_NVSTREAMMUX=yes:
• DeepStream Version : deepstream-6.3

• JetPack Version (valid for Jetson only)

• TensorRT Version: 8.6.1.

• NVIDIA GPU Driver Version (valid for GPU only) : Driver Version: 535.86.10 CUDA Version: 12.2

• Issue Type( questions, new requirements, bugs): questions

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

deepstream_rt_src_add_del.py with slight changes.

I want to add and remove channels to simulate channel recovery (e.g., RTSP drops / EOS). I am also forced to use the new nvstreammux, because it keeps the order of frames within a batch when I run the Triton Inference Server with a Python backend. However, with the new streammux I am facing issues such as a frozen output screen. I am attaching the model.py, the config pbtxt, and deepstream_rt_src_add_del.py in a compressed file. The only small change I made to deepstream_rt_src_add_del.py is to simulate adding and removing the same src_{id} and sink_{id}, as happens in the original code.
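For context, this is roughly how the new streammux is selected and configured before the pipeline is built. This is only a minimal sketch: the environment variable is normally exported in the shell before launching (setting it from Python here just keeps the sketch self-contained), and the config file name mux_config.txt is an example, not a file from my attachment.

import os
# Select the new nvstreammux; normally exported in the shell before launch.
os.environ.setdefault("USE_NEW_NVSTREAMMUX", "yes")

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# The new mux ignores width/height/batched-push-timeout; batching behaviour
# is controlled through batch-size and an optional config file.
streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
streammux.set_property("batch-size", 30)
streammux.set_property("config-file-path", "mux_config.txt")  # example file name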

While running the code, I also see the following errors:

root@ajith-OMEN-by-HP-Laptop-16-c0xxx:/opt/nvidia/deepstream/deepstream-6.3/sources/src# python3 deepstream_rt_src_add_del.py  file:///opt/nvidia/deepstream/deepstream-6.3/sources/src/2.mp4 
Creating Pipeline 
 
Creating streammux 
 
max_fps_dur 8.33333e+06 min_fps_dur 2e+08
Creating source_bin  0  
 
Creating uridecodebin for [file:///opt/nvidia/deepstream/deepstream-6.3/sources/src/2.mp4]
source-bin-00
Creating Pgie 
 
Creating nvtracker 
 
Creating tiler 
 
Creating nvvidconv 
 
Creating nvosd 
 
Creating EGLSink 

WARNING: Overriding infer-config batch-size 0  with number of sources  1  

Adding elements to Pipeline 

Linking elements in the Pipeline 

libEGL warning: MESA-LOADER: failed to retrieve device information

libEGL warning: DRI2: could not open /dev/dri/card1 (No such file or directory)
WARNING: infer_proto_utils.cpp:201 backend.trt_is is deprecated. updated it to backend.triton
W0423 11:42:13.975956 10378 metrics.cc:512] Unable to get power limit for GPU 0. Status:Success, value:0.000000
INFO: infer_trtis_backend.cpp:218 TrtISBackend id:1 initialized model: centerface
Decodebin child added: source 

Decodebin child added: decodebin0 

Now playing...
1 :  file:///opt/nvidia/deepstream/deepstream-6.3/sources/src/2.mp4
Starting pipeline 

Decodebin child added: qtdemux0 

Decodebin child added: multiqueue0 

Decodebin child added: h264parse0 

Decodebin child added: capsfilter0 

Warning: gst-stream-error-quark: No decoder available for type 'audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, level=(string)2, base-profile=(string)lc, profile=(string)lc, codec_data=(buffer)12100000000000000000000000000000, rate=(int)44100, channels=(int)2'. (6): gsturidecodebin.c(920): unknown_type_cb (): /GstPipeline:pipeline0/GstURIDecodeBin:source-bin-00
Decodebin child added: nvv4l2decoder0 

In cb_newpad

gstname= video/x-raw
sink_0
Decodebin linked to pipeline
max_fps_dur 8.33333e+06 min_fps_dur 2e+08
W0423 11:42:14.976414 10378 metrics.cc:512] Unable to get power limit for GPU 0. Status:Success, value:0.000000
W0423 11:42:15.981258 10378 metrics.cc:512] Unable to get power limit for GPU 0. Status:Success, value:0.000000
Calling Start 2 
Creating uridecodebin for [file:///opt/nvidia/deepstream/deepstream-6.3/sources/src/2.mp4]
source-bin-02
Decodebin child added: source 

Decodebin child added: decodebin1 

Decodebin child added: qtdemux1 

Decodebin child added: multiqueue1 

Decodebin child added: h264parse1 

Decodebin child added: capsfilter1 

Decodebin child added: nvv4l2decoder1 

In cb_newpad

gstname= video/x-raw
sink_2
Decodebin linked to pipeline
Warning: gst-stream-error-quark: No decoder available for type 'audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, level=(string)2, base-profile=(string)lc, profile=(string)lc, codec_data=(buffer)12100000000000000000000000000000, rate=(int)44100, channels=(int)2'. (6): gsturidecodebin.c(920): unknown_type_cb (): /GstPipeline:pipeline0/GstURIDecodeBin:source-bin-02
Calling Start 1 
Creating uridecodebin for [file:///opt/nvidia/deepstream/deepstream-6.3/sources/src/2.mp4]
source-bin-01
Decodebin child added: source 

Decodebin child added: decodebin2 

Decodebin child added: qtdemux2 

Decodebin child added: multiqueue2 

Decodebin child added: h264parse2 

Decodebin child added: capsfilter2 

Decodebin child added: nvv4l2decoder2 

In cb_newpad

gstname= video/x-raw
sink_1
Decodebin linked to pipeline
Calling Stop 0 
STATE CHANGE SUCCESS

sink_0
Decodebin child added: source 

Decodebin child added: decodebin3 

Decodebin child added: qtdemux3 

Decodebin child added: multiqueue3 

Decodebin child added: h264parse3 

Decodebin child added: capsfilter3 

Decodebin child added: nvv4l2decoder3 

In cb_newpad

gstname= video/x-raw
sink_0

(python3:10378): GStreamer-CRITICAL **: 17:12:34.975: Element Stream-muxer already has a pad named sink_0, the behaviour of  gst_element_get_request_pad() for existing pads is undefined!
Decodebin linked to pipeline
STATE CHANGE SUCCESS

Warning: gst-stream-error-quark: No decoder available for type 'audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, level=(string)2, base-profile=(string)lc, profile=(string)lc, codec_data=(buffer)12100000000000000000000000000000, rate=(int)44100, channels=(int)2'. (6): gsturidecodebin.c(920): unknown_type_cb (): /GstPipeline:pipeline0/GstURIDecodeBin:source-bin-01
Warning: gst-stream-error-quark: No decoder available for type 'audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, level=(string)2, base-profile=(string)lc, profile=(string)lc, codec_data=(buffer)12100000000000000000000000000000, rate=(int)44100, channels=(int)2'. (6): gsturidecodebin.c(920): unknown_type_cb (): /GstPipeline:pipeline0/GstURIDecodeBin:source-bin-00
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(3003): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(3003): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(3003): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(3003): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(3003): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(3003): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(3003): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(3003): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.
^CExiting app

[ERROR push 353] push failed [-2]
[ERROR push 353] push failed [-2]
[ERROR push 353] push failed [-2]
[ERROR push 353] push failed [-2]
Cleaning up...
[ERROR push 353] push failed [-2]
[ERROR push 353] push failed [-2]
[ERROR push 353] push failed [-2]
[ERROR push 353] push failed [-2]
[ERROR push 353] push failed [-2]

centerface_fake.zip (11.6 KB)

After three channels have been added, the app is supposed to remove a random channel and then re-add one, and so on, but after that change the screen freezes or the frame rate drops sharply. Please help me identify the issue.

In fact, the issue does not have anything to do with the Triton Inference Server. Please find below the version with the pgie part removed.

#!/usr/bin/env python3

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

import sys
sys.path.append('../')
import gi
import configparser
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib
from ctypes import *
import time
import math
import random
import platform
from common.is_aarch_64 import is_aarch64

import pyds

MAX_DISPLAY_LEN=64
PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3
MUXER_OUTPUT_WIDTH=1920
MUXER_OUTPUT_HEIGHT=1080
MUXER_BATCH_TIMEOUT_USEC = 33000
TILED_OUTPUT_WIDTH=1280
TILED_OUTPUT_HEIGHT=720
GPU_ID = 0
MAX_NUM_SOURCES = 4
SINK_ELEMENT = "nveglglessink"
PGIE_CONFIG_FILE = "dstest_pgie_config.txt"
TRACKER_CONFIG_FILE = "dstest_tracker_config.txt"

SGIE1_CONFIG_FILE = "dstest_sgie1_config.txt"
SGIE2_CONFIG_FILE = "dstest_sgie2_config.txt"

CONFIG_GPU_ID = "gpu-id"
CONFIG_GROUP_TRACKER = "tracker"
CONFIG_GROUP_TRACKER_WIDTH = "tracker-width"
CONFIG_GROUP_TRACKER_HEIGHT = "tracker-height"
CONFIG_GROUP_TRACKER_LL_CONFIG_FILE = "ll-config-file"
CONFIG_GROUP_TRACKER_LL_LIB_FILE = "ll-lib-file"

g_num_sources = 0
g_source_id_list = [0] * MAX_NUM_SOURCES
g_eos_list = [False] * MAX_NUM_SOURCES
g_source_enabled = [False] * MAX_NUM_SOURCES
g_source_bin_list = [None] * MAX_NUM_SOURCES

pgie_classes_str= ["Vehicle", "TwoWheeler", "Person","RoadSign"]

uri = ""

loop = None
pipeline = None
streammux = None
sink = None
sgie1 = None
sgie2 = None
nvvideoconvert = None
nvosd = None
tiler = None
tracker = None

def decodebin_child_added(child_proxy,Object,name,user_data):
    print("Decodebin child added:", name, "\n")
    if(name.find("decodebin") != -1):
        Object.connect("child-added",decodebin_child_added,user_data)   
    if(name.find("nvv4l2decoder") != -1):
        if (is_aarch64()):
            Object.set_property("enable-max-performance", True)
            Object.set_property("drop-frame-interval", 0)
            Object.set_property("num-extra-surfaces", 0)
        else:
            Object.set_property("gpu_id", GPU_ID)


def cb_newpad(decodebin,pad,data):
    global streammux
    print("In cb_newpad\n")
    caps=pad.get_current_caps()
    gststruct=caps.get_structure(0)
    gstname=gststruct.get_name()

    # Need to check if the pad created by the decodebin is for video and not
    # audio.
    print("gstname=",gstname)
    if(gstname.find("video")!=-1):
        source_id = data
        pad_name = "sink_%u" % source_id
        print(pad_name)
        #Get a sink pad from the streammux, link to decodebin
        sinkpad = streammux.get_request_pad(pad_name)
        if pad.link(sinkpad) == Gst.PadLinkReturn.OK:
            print("Decodebin linked to pipeline")
        else:
            sys.stderr.write("Failed to link decodebin to pipeline\n")


def create_uridecode_bin(index,filename):
    global g_source_id_list
    print("Creating uridecodebin for [%s]" % filename)

    # Create a source GstBin to abstract this bin's content from the rest of the
    # pipeline
    g_source_id_list[index] = index
    bin_name="source-bin-%02d" % index
    print(bin_name)

    # Source element for reading from the uri.
    # We will use decodebin and let it figure out the container format of the
    # stream and the codec and plug the appropriate demux and decode plugins.
    bin=Gst.ElementFactory.make("uridecodebin", bin_name)
    if not bin:
        sys.stderr.write(" Unable to create uri decode bin \n")
    # We set the input uri to the source element
    bin.set_property("uri",filename)
    # Connect to the "pad-added" signal of the decodebin which generates a
    # callback once a new pad for raw data has been created by the decodebin
    bin.connect("pad-added",cb_newpad,g_source_id_list[index])
    bin.connect("child-added",decodebin_child_added,g_source_id_list[index])

    #Set status of the source to enabled
    g_source_enabled[index] = True

    return bin


def stop_release_source(source_id):
    global g_num_sources
    global g_source_bin_list
    global streammux
    global pipeline

    #Attempt to change status of source to be released 
    state_return = g_source_bin_list[source_id].set_state(Gst.State.NULL)

    if state_return == Gst.StateChangeReturn.SUCCESS:
        print("STATE CHANGE SUCCESS\n")
        pad_name = "sink_%u" % source_id
        print(pad_name)
        #Retrieve sink pad to be released
        sinkpad = streammux.get_static_pad(pad_name)
        #Send flush stop event to the sink pad, then release from the streammux
        sinkpad.send_event(Gst.Event.new_flush_stop(False))
        streammux.release_request_pad(sinkpad)
        print("STATE CHANGE SUCCESS\n")
        #Remove the source bin from the pipeline
        pipeline.remove(g_source_bin_list[source_id])
        source_id -= 1
        g_num_sources -= 1

    elif state_return == Gst.StateChangeReturn.FAILURE:
        print("STATE CHANGE FAILURE\n")
    
    elif state_return == Gst.StateChangeReturn.ASYNC:
        state_return = g_source_bin_list[source_id].get_state(Gst.CLOCK_TIME_NONE)
        pad_name = "sink_%u" % source_id
        print(pad_name)
        sinkpad = streammux.get_static_pad(pad_name)
        sinkpad.send_event(Gst.Event.new_flush_stop(False))
        streammux.release_request_pad(sinkpad)
        print("STATE CHANGE ASYNC\n")
        pipeline.remove(g_source_bin_list[source_id])
        source_id -= 1
        g_num_sources -= 1


def delete_sources(data):
    global loop
    global g_num_sources
    global g_eos_list
    global g_source_enabled

    #First delete sources that have reached end of stream
    for source_id in range(MAX_NUM_SOURCES):
        if (g_eos_list[source_id] and g_source_enabled[source_id]):
            g_source_enabled[source_id] = False
            stop_release_source(source_id)

    #Quit if no sources remaining
    if (g_num_sources == 0):
        loop.quit()
        print("All sources stopped quitting")
        return False

    #Randomly choose an enabled source to delete
    source_id = random.randrange(0, MAX_NUM_SOURCES)
    while (not g_source_enabled[source_id]):
        source_id = random.randrange(0, MAX_NUM_SOURCES)
    #Disable the source
    g_source_enabled[source_id] = False
    #Release the source
    print("Calling Stop %d " % source_id)
    stop_release_source(source_id)

    #Quit if no sources remaining
    if (g_num_sources == 0):
        # loop.quit()

        print("All sources stopped quitting")
        return False

    return True


def add_sources(data):
    global g_num_sources
    global g_source_enabled
    global g_source_bin_list
    global pipeline

    source_id = g_num_sources

    #Randomly select an un-enabled source to add
    source_id = random.randrange(0, MAX_NUM_SOURCES)
    while (g_source_enabled[source_id]):
        source_id = random.randrange(0, MAX_NUM_SOURCES)

    #Enable the source
    g_source_enabled[source_id] = True

    print("Calling Start %d " % source_id)

    #Create a uridecode bin with the chosen source id
    source_bin = create_uridecode_bin(source_id, uri)

    if (not source_bin):
        sys.stderr.write("Failed to create source bin. Exiting.")
        exit(1)
    
    #Add source bin to our list and to pipeline
    g_source_bin_list[source_id] = source_bin
    pipeline.add(source_bin)

    #Set state of source bin to playing
    state_return = g_source_bin_list[source_id].set_state(Gst.State.PLAYING)

    if state_return == Gst.StateChangeReturn.SUCCESS:
        print("STATE CHANGE SUCCESS\n")
        source_id += 1

    elif state_return == Gst.StateChangeReturn.FAILURE:
        print("STATE CHANGE FAILURE\n")
    
    elif state_return == Gst.StateChangeReturn.ASYNC:
        state_return = g_source_bin_list[source_id].get_state(Gst.CLOCK_TIME_NONE)
        source_id += 1

    elif state_return == Gst.StateChangeReturn.NO_PREROLL:
        print("STATE CHANGE NO PREROLL\n")

    g_num_sources += 1

    #Modified from the original sample: once MAX_NUM_SOURCES-1 sources are active,
    #call delete_sources immediately instead of every 10 seconds
    if (g_num_sources == MAX_NUM_SOURCES-1):
        # GLib.timeout_add_seconds(10, delete_sources, g_source_bin_list)
        delete_sources(g_source_bin_list)
        return False

    
    return True

def bus_call(bus, message, loop):
    global g_eos_list
    t = message.type
    if t == Gst.MessageType.EOS:
        sys.stdout.write("End-of-stream\n")
        loop.quit()
    elif t==Gst.MessageType.WARNING:
        err, debug = message.parse_warning()
        sys.stderr.write("Warning: %s: %s\n" % (err, debug))
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        sys.stderr.write("Error: %s: %s\n" % (err, debug))
        loop.quit()
    elif t == Gst.MessageType.ELEMENT:
        struct = message.get_structure()
        #Check for stream-eos message
        if struct is not None and struct.has_name("stream-eos"):
            parsed, stream_id = struct.get_uint("stream-id")
            if parsed:
                #Set eos status of stream to True, to be deleted in delete-sources
                print("Got EOS from stream %d" % stream_id)
                g_eos_list[stream_id] = True
    return True

def main(args):
    global g_num_sources
    global g_source_bin_list
    global uri

    global loop
    global pipeline
    global streammux
    global sink
    global sgie1
    global sgie2
    global nvvideoconvert
    global nvosd
    global tiler
    global tracker
     # Check input arguments
    if len(args) != 2:
        sys.stderr.write("usage: %s <uri1> \n" % args[0])
        sys.exit(1)

    num_sources=len(args)-1

    # Standard GStreamer initialization
    Gst.init(None)

    # Create gstreamer elements */
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()
    is_live = False

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")
    print("Creating streammux \n ")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # streammux.set_property("batched-push-timeout", 25000)
    streammux.set_property("batch-size", 30)
    # streammux.set_property("gpu_id", GPU_ID)

    pipeline.add(streammux)
    # streammux.set_property("live-source", 1)
    uri = args[1]
    for i in range(num_sources):
        print("Creating source_bin ",i," \n ")
        uri_name=args[i+1]
        if uri_name.find("rtsp://") == 0 :
            is_live = True
        #Create first source bin and add to pipeline
        source_bin=create_uridecode_bin(i, uri_name)
        if not source_bin:
            sys.stderr.write("Failed to create source bin. Exiting. \n")
            sys.exit(1)
        g_source_bin_list[i] = source_bin
        pipeline.add(source_bin)

    g_num_sources = num_sources


    print("Creating nvtracker \n ")
    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    if not tracker:
        sys.stderr.write(" Unable to create tracker \n")

    print("Creating tiler \n ")
    tiler=Gst.ElementFactory.make("nvmultistreamtiler", "nvtiler")
    if not tiler:
        sys.stderr.write(" Unable to create tiler \n")

    print("Creating nvvidconv \n ")
    nvvideoconvert = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvideoconvert:
        sys.stderr.write(" Unable to create nvvidconv \n")

    print("Creating nvosd \n ")
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    sgie1 = Gst.ElementFactory.make("nvinfer", "secondary1-nvinference-engine")
    if not sgie1:
        sys.stderr.write(" Unable to make sgie1 \n")

    sgie2 = Gst.ElementFactory.make("nvinfer", "secondary2-nvinference-engine")
    if not sgie2:
        sys.stderr.write(" Unable to make sgie2 \n")


    if is_aarch64():
        print("Creating nv3dsink \n")
        sink = Gst.ElementFactory.make("nv3dsink", "nv3d-sink")
        if not sink:
            sys.stderr.write(" Unable to create nv3dsink \n")
    else:
        print("Creating EGLSink \n")
        sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
        if not sink:
            sys.stderr.write(" Unable to create egl sink \n")
    if is_live:
        print("Atleast one of the sources is live")
        streammux.set_property('live-source', 1)

    #Set streammux width and height
    # streammux.set_property('width', MUXER_OUTPUT_WIDTH)
    # streammux.set_property('height', MUXER_OUTPUT_HEIGHT)
    #Set pgie, sgie1, and sgie2 configuration file paths

    #Set properties of tracker
    config = configparser.ConfigParser()
    config.read(TRACKER_CONFIG_FILE)
    config.sections()

    #Set necessary properties of the nvinfer element, the necessary ones are:
    # pgie_batch_size=pgie.get_property("batch-size")
    # if(pgie_batch_size < MAX_NUM_SOURCES):
    #     print("WARNING: Overriding infer-config batch-size",pgie_batch_size," with number of sources ", num_sources," \n")
    # pgie.set_property("batch-size",MAX_NUM_SOURCES)

    #Set gpu IDs of the inference engines
    # pgie.set_property("gpu_id", GPU_ID)


    #Set tiler properties
    tiler_rows=int(math.sqrt(num_sources))
    tiler_columns=int(math.ceil((1.0*num_sources)/tiler_rows))
    tiler.set_property("rows",tiler_rows)
    tiler.set_property("columns",tiler_columns)
    tiler.set_property("width", TILED_OUTPUT_WIDTH)
    tiler.set_property("height", TILED_OUTPUT_HEIGHT)

    #Set gpu IDs of tiler, nvvideoconvert, and nvosd
    tiler.set_property("gpu_id", GPU_ID)
    nvvideoconvert.set_property("gpu_id", GPU_ID)
    nvosd.set_property("gpu_id", GPU_ID)

    #Set gpu ID of sink if not aarch64
    if(not is_aarch64()):
        sink.set_property("gpu_id", GPU_ID)

    print("Adding elements to Pipeline \n")
    pipeline.add(tiler)
    pipeline.add(nvvideoconvert)
    pipeline.add(nvosd)
    pipeline.add(sink)

    # We link elements in the following order:
    # sourcebin -> streammux -> nvinfer -> nvtracker -> nvdsanalytics ->
    # nvtiler -> nvvideoconvert -> nvdsosd -> (if aarch64, transform ->) sink
    print("Linking elements in the Pipeline \n")
    # streammux.link(pgie)
    # pgie.link(tiler)
    streammux.link(tiler)
    tiler.link(nvvideoconvert)
    nvvideoconvert.link(nvosd)
    nvosd.link(sink)

    sink.set_property("sync", 1)
    sink.set_property("qos",0)

    # create an event loop and feed gstreamer bus mesages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)

    pipeline.set_state(Gst.State.PAUSED)

    # List the sources
    print("Now playing...")
    for i, source in enumerate(args):
        if (i != 0):
            print(i, ": ", source)

    print("Starting pipeline \n")
    # start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)

    GLib.timeout_add_seconds(10, add_sources, g_source_bin_list)

    try:
        loop.run()
    except:
        pass
    # cleanup
    print("Exiting app\n")
    pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    sys.exit(main(sys.argv))

After the third channel is added, I try to remove and re-add the same channels, but the output freezes instead of continuing to play smoothly.
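One detail I noticed in the log is the GStreamer-CRITICAL about the Stream-muxer already having a pad named sink_0 when the same source id is re-added. Below is a minimal sketch of the kind of guard I have in mind for cb_newpad, purely as an assumption about where the problem might be and not a confirmed fix; names are the same as in the script above. It reuses an existing sink pad instead of requesting a new pad with the same name.

def cb_newpad(decodebin, pad, data):
    global streammux
    caps = pad.get_current_caps()
    gstname = caps.get_structure(0).get_name()
    if gstname.find("video") == -1:
        return
    pad_name = "sink_%u" % data
    # If the pad from the previous use of this source id has not been released
    # yet, reuse it; requesting a pad with an existing name triggers the
    # "already has a pad named sink_0" critical seen in the log above.
    sinkpad = streammux.get_static_pad(pad_name)
    if sinkpad is None:
        sinkpad = streammux.get_request_pad(pad_name)
    if pad.link(sinkpad) != Gst.PadLinkReturn.OK:
        sys.stderr.write("Failed to link decodebin to pipeline\n")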