Different trackID between PGIE (car detector) and SGIE (plate detector)

Please provide complete information as applicable to your setup.

Hardware Platform (Jetson / GPU) Jetson NX
DeepStream Version 5.0
JetPack Version (valid for Jetson only) JetPack 4.5

Early this month I managed to get my DeepStream pipeline working to detect cars and their plates; see: DeepStream using TrafficCamNet as PGIE LPR as SGIE.

However, I have recently run into a new problem and am hoping I can get some help here. Long story short: the object_id values of PGIE and SGIE objects are different, so I cannot find a given car's corresponding plate. I am trying to find a way to preserve the trackID, or some workaround to match each car with its plate.

The pipeline I am using is:

streammux.link(queue1)
queue1.link(pgie) # pgie: car detector
pgie.link(queue2)
queue2.link(sgie1)  # sgie1: plate detector 
sgie1.link(queue3)
queue3.link(tracker)
tracker.link(sgie2) # sgie2: plate number recognizer
sgie2.link(queue6)
queue6.link(tiler)
tiler.link(queue7)
queue7.link(nvvidconv)
nvvidconv.link(queue8)
queue8.link(nvosd)
nvosd.link(tee)
# tee src_%u request pad -> queue9 sink pad (linked manually, see main code)
queue9.link(nvvidconv_postosd)
nvvidconv_postosd.link(caps)
caps.link(encoder)
encoder.link(codecparse)
codecparse.link(flvmux)
flvmux.link(sink2)

The pipeline works: sgie1 manages to detect the car's plate from the results passed down by the pgie. However, the trackID differs between PGIE and SGIE objects:

As you can see in the first picture, for example, the black car's trackID is 43 while its license plate's trackID is 44. So when I use obj_meta to extract the information, it is hard to match the corresponding plates to the right cars, since their IDs are not the same.

But since the result is passed down from the pgie to the sgie, is there any connection between the trackIDs in PGIE and SGIE that I can use to match the IDs? For example, in the second picture, license_plate 32 is car 33's SGIE tracking result.

My configs are as follows:

PGIE config

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
# model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
# proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=resnet18_detector.etlt_b16_gpu0_int8.engine
# model-engine-file=resnet18_trafficcamnet_pruned.etlt_b4_gpu0_int8.engine
# model-engine-file=ccpd_pruned.etlt_b16_gpu0_int8.engine

labelfile-path=labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
# force-implicit-batch-dim=1
batch-size=16
process-mode=1
model-color-format=0
network-mode=1
num-detected-classes=1
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1

SGIE1 config (license plate detector):

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-engine-file=ccpd_pruned.etlt_b16_gpu0_int8.engine
labelfile-path=labels_lpd.txt
force-implicit-batch-dim=1
batch-size=16
network-mode=1
num-detected-classes=1
##1 Primary 2 Secondary
process-mode=2
interval=0
gie-unique-id=2
#0 detector 1 classifier 2 segmentation 3 instance segmentation
network-type=0
operate-on-gie-id=1
operate-on-class-ids=0
model-color-format=0
#no cluster
cluster-mode=3
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
# output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
input-object-min-height=30
input-object-min-width=40


[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

SGIE2 config (number recognizer):

[property]
gpu-id=0
model-engine-file=lpr_ch_onnx_b16.engine
labelfile-path=labels_ch.txt
batch-size=16
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
interval=0
num-detected-classes=67
gie-unique-id=4
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
#0=Detection 1=Classifier 2=Segmentation
network-type=1
parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=libnvdsinfer_custom_impl_lpr.so
process-mode=2
operate-on-gie-id=2
net-scale-factor=0.00392156862745098
#net-scale-factor=1.0
#0=RGB 1=BGR 2=GRAY
model-color-format=0
maintain-aspect-ratio=0
#scaling-compute-hw=2

[class-attrs-all]
threshold=0.5

Main Code:

import argparse
import sys
sys.path.append('../')
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
import configparser
from gi.repository import GObject, Gst, GstRtspServer
from gi.repository import GLib
from ctypes import *
import time
import math
import platform
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call
from common.FPS import GETFPS
import pyds

fps_streams={}


MAX_DISPLAY_LEN=64
PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3

# define sgie config according to labels -- RJ
# car color sgie1 black;blue;brown;gold;green;grey;maroon;orange;red;silver;white;yellow
# car maker acura;audi;bmw;chevrolet;chrysler;dodge;ford;gmc;honda;hyundai;infiniti;jeep;kia;lexus;mazda;mercedes;nissan;subaru;toyota;volkswagen
# car type coupe;largevehicle;sedan;suv;truck;van
# CAR_COLOR = ["black", "blue", "brown", "gold", "green", "grey", "maroon", "orange", "red", "silver", "white", "yellow"]

MUXER_OUTPUT_WIDTH=640
MUXER_OUTPUT_HEIGHT=480
MUXER_BATCH_TIMEOUT_USEC=4000000
TILED_OUTPUT_WIDTH=1920
TILED_OUTPUT_HEIGHT=1080
GST_CAPS_FEATURES_NVMM="memory:NVMM"
OSD_PROCESS_MODE= 1
OSD_DISPLAY_TEXT= 1
pgie_classes_str= ["Vehicle", "TwoWheeler", "Person","RoadSign"]

# nvanlytics_src_pad_buffer_probe  will extract metadata received on nvtiler sink pad
# and update params for drawing rectangle, object information etc.

def get_plate_number(frame_number, obj_meta):
    cls_meta = obj_meta.classifier_meta_list
    while cls_meta is not None:
        cls = pyds.NvDsClassifierMeta.cast(cls_meta.data)
        info = cls.label_info_list  # type of pyds.GList
        while info is not None:
            label_meta = pyds.glist_get_nvds_label_info(info.data)
            if cls.unique_component_id == 4:  # gie-unique-id of the recognizer
                print("output is {}, type is {}".format(label_meta.result_label, type(label_meta.result_label)))
                return label_meta.result_label
            try:
                info = info.next
            except StopIteration:
                break
        # advance to the next classifier meta only after the inner loop is done
        # (in the original code this ran inside the inner loop, skipping metas)
        try:
            cls_meta = cls_meta.next
        except StopIteration:
            break
    return None
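
# A minimal usage sketch (my assumption of how this helper would be called,
# from inside the object loop of the probe below; plate_dict is the
# module-level dict defined there):
#
#   plate = get_plate_number(frame_number, obj_meta)
#   if plate is not None:
#       plate_dict[obj_meta.object_id] = plate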


# output LineCrossingNumber
def LineCrossingPrint(dict_currentLC, dict_accumLC):
    # print the accumulated count for every lane that saw a crossing this frame
    for lane in dict_currentLC:
        if dict_currentLC[lane] != 0:
            print("{0}  is {1}".format(lane, dict_accumLC[lane]))


plate_dict = {}

def nvanalytics_src_pad_buffer_probe(pad, info, u_data):

    frame_number=0
    num_rects=0

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    # print('sequence one')
    plate_number = ''
    global plate_dict
    while l_frame:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break



        frame_number=frame_meta.frame_num
        print('------frame number {} --------'.format(frame_number))
        l_obj=frame_meta.obj_meta_list
        num_rects = frame_meta.num_obj_meta
        obj_counter = {
        PGIE_CLASS_ID_VEHICLE:0,
        PGIE_CLASS_ID_PERSON:0,
        PGIE_CLASS_ID_BICYCLE:0,
        PGIE_CLASS_ID_ROADSIGN:0
        }
        # print("#"*50)
        while l_obj:
            try: 
                # Note that l_obj.data needs a cast to pyds.NvDsObjectMeta
                # The casting is done by pyds.NvDsObjectMeta.cast()
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
                cls_meta = obj_meta.classifier_meta_list
                while cls_meta is not None:
                    cls = pyds.NvDsClassifierMeta.cast(cls_meta.data)
                    info = cls.label_info_list  # type of pyds.GList
                    while info is not None:

                        label_meta = pyds.glist_get_nvds_label_info(info.data)
                        if cls.unique_component_id == 4:
                            print('obj_id = {}, plate = {}'.format(obj_meta.object_id, label_meta.result_label))
                            # obj_meta.object_id =
                        try:
                            info = info.next
                        except StopIteration:
                            break

                    try:
                        cls_meta = cls_meta.next
                    except StopIteration:
                        break
            except StopIteration:
                print('------stop-------')
                break


            # obj_counter[obj_meta.class_id] += 1
            # obj_meta.rect_params.has_bg_color = 1
            # obj_meta.rect_params.border_color.set(0.5, 1.0, 1.0, 1.0)
            # obj_meta.rect_params.bg_color.set(0.5, 1.0, 1.0, 0.2)
            # obj_meta.text_params.set_bg_clr = 1
            # obj_meta.text_params.text_bg_clr.set(0.5, 1.0, 1.0, 0.2)
            # obj_meta.text_params.font_params.font_size = 14
            # print('obj_meta.object_id: {} obj_meta.'.format(obj_meta.object_id))        
            try:
                l_obj=l_obj.next
            except StopIteration:
                break
   
        
        # print("Frame Number=", frame_number, "stream id=", frame_meta.pad_index, "Number of Objects=",num_rects,"Vehicle_count=",obj_counter[PGIE_CLASS_ID_VEHICLE],"Person_count=",obj_counter[PGIE_CLASS_ID_PERSON])
        # Get frame rate through this probe
        fps_streams["stream{0}".format(frame_meta.pad_index)].get_fps()

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 2
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen.
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        # NOTE: display_text is never set here and the meta is never attached
        # via pyds.nvds_add_display_meta_to_frame(), so nothing extra is drawn.

        try:
            l_frame=l_frame.next
        except StopIteration:
            break
        print('------------ \n')
        # print("#"*50)


    return Gst.PadProbeReturn.OK



def cb_newpad(decodebin, decoder_src_pad,data):
    print("In cb_newpad\n")
    caps=decoder_src_pad.get_current_caps()
    gststruct=caps.get_structure(0)
    gstname=gststruct.get_name()
    source_bin=data
    features=caps.get_features(0)

    # Need to check if the pad created by the decodebin is for video and not
    # audio.
    print("gstname=",gstname)
    if(gstname.find("video")!=-1):
        # Link the decodebin pad only if decodebin has picked nvidia
        # decoder plugin nvdec_*. We do this by checking if the pad caps contain
        # NVMM memory features.
        print("features=",features)
        if features.contains("memory:NVMM"):
            # Get the source bin ghost pad
            bin_ghost_pad=source_bin.get_static_pad("src")
            if not bin_ghost_pad.set_target(decoder_src_pad):
                sys.stderr.write("Failed to link decoder src pad to source bin ghost pad\n")
        else:
            sys.stderr.write(" Error: Decodebin did not pick nvidia decoder plugin.\n")

def decodebin_child_added(child_proxy,Object,name,user_data):
    print("Decodebin child added:", name, "\n")
    if(name.find("decodebin") != -1):
        Object.connect("child-added",decodebin_child_added,user_data)   
    if(is_aarch64() and name.find("nvv4l2decoder") != -1):
        print("Seting bufapi_version\n")
        Object.set_property("bufapi-version",True)

def create_source_bin(index,uri):
    print("Creating source bin")

    # Create a source GstBin to abstract this bin's content from the rest of the
    # pipeline
    bin_name="source-bin-%02d" %index
    print(bin_name)
    nbin=Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write(" Unable to create source bin \n")

    # Source element for reading from the uri.
    # We will use decodebin and let it figure out the container format of the
    # stream and the codec and plug the appropriate demux and decode plugins.
    uri_decode_bin=Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    if not uri_decode_bin:
        sys.stderr.write(" Unable to create uri decode bin \n")
    # We set the input uri to the source element
    uri_decode_bin.set_property("uri",uri)
    # Connect to the "pad-added" signal of the decodebin which generates a
    # callback once a new pad for raw data has been created by the decodebin
    uri_decode_bin.connect("pad-added",cb_newpad,nbin)
    uri_decode_bin.connect("child-added",decodebin_child_added,nbin)

    # We need to create a ghost pad for the source bin which will act as a proxy
    # for the video decoder src pad. The ghost pad will not have a target right
    # now. Once the decode bin creates the video decoder and generates the
    # cb_newpad callback, we will set the ghost pad target to the video decoder
    # src pad.
    Gst.Bin.add(nbin,uri_decode_bin)
    bin_pad=nbin.add_pad(Gst.GhostPad.new_no_target("src",Gst.PadDirection.SRC))
    if not bin_pad:
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    return nbin

def main(args):
    # NOTE: the input URI is hardcoded here, so the usage check below never fails
    args.append('file:/home/nvidia/Desktop/plate.MP4')
    if len(args) < 2:
        sys.stderr.write("usage: %s <uri1> [uri2] ... [uriN]\n" % args[0])
        sys.exit(1)

    for i in range(0,len(args)-1):
        fps_streams["stream{0}".format(i)]=GETFPS(i)
    number_sources=len(args)-1



    # Standard GStreamer initialization
    GObject.threads_init()
    Gst.init(None)

    # Create gstreamer elements */
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()
    is_live = False

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")
    print("Creating streamux \n ")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    pipeline.add(streammux)

    #############################################################################################

    for i in range(number_sources):
        print("Creating source_bin ", i, " \n ")
        uri_name = args[i + 1]
        if uri_name.find("rtsp://") == 0:
            is_live = True
        source_bin = create_source_bin(i, uri_name)
        if not source_bin:
            sys.stderr.write("Unable to create source bin \n")
        pipeline.add(source_bin)
        padname = "sink_%u" % i
        sinkpad = streammux.get_request_pad(padname)
        if not sinkpad:
            sys.stderr.write("Unable to create sink pad bin \n")
        srcpad = source_bin.get_static_pad("src")
        if not srcpad:
            sys.stderr.write("Unable to create src pad bin \n")
        srcpad.link(sinkpad)

    # Adding videoconvert -> nvvideoconvert as not all
    # raw formats are supported by nvvideoconvert;
    # Say YUYV is unsupported - which is the common
    # raw format for many logi usb cams
    # In case we have a camera with raw format supported in
    # nvvideoconvert, GStreamer plugins' capability negotiation
    # shall be intelligent enough to reduce compute by
    # videoconvert doing passthrough (TODO we need to confirm this)

    # videoconvert to make sure a superset of raw formats are supported

    # adding usb module -- RJ

    #########################################################################

    queue1=Gst.ElementFactory.make("queue","queue1")
    queue2=Gst.ElementFactory.make("queue","queue2")
    queue3=Gst.ElementFactory.make("queue","queue3")
    queue4=Gst.ElementFactory.make("queue","queue4")
    queue5=Gst.ElementFactory.make("queue","queue5")
    queue6=Gst.ElementFactory.make("queue","queue6")
    queue7=Gst.ElementFactory.make("queue","queue7")
    queue8=Gst.ElementFactory.make("queue","queue8")
    queue9=Gst.ElementFactory.make("queue","rtsp_queue")
    queue10=Gst.ElementFactory.make("queue","rtmp_queue")
    pipeline.add(queue1)
    pipeline.add(queue2)
    pipeline.add(queue3)
    pipeline.add(queue4)
    pipeline.add(queue5)
    pipeline.add(queue6)
    pipeline.add(queue7)
    pipeline.add(queue8)
    pipeline.add(queue9)
    pipeline.add(queue10)


    print("Creating Pgie \n ")
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    print("Creating nvtracker \n ")
    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    if not tracker:
        sys.stderr.write(" Unable to create tracker \n")

    tracker2 = Gst.ElementFactory.make("nvtracker", "tracker2")
    if not tracker2:
        sys.stderr.write(" Unable to create tracker \n")


    sgie1 = Gst.ElementFactory.make("nvinfer", "secondary1-nvinference-engine")
    if not sgie1:
        sys.stderr.write(" Unable to make sgie1 \n")

    sgie2 = Gst.ElementFactory.make("nvinfer", "secondary2-nvinference-engine")
    if not sgie2:
        sys.stderr.write(" Unable to make sgie2 \n")

    sgie3 = Gst.ElementFactory.make("nvinfer", "secondary3-nvinference-engine")
    if not sgie3:
        sys.stderr.write(" Unable to make sgie3 \n")

    sgie4 = Gst.ElementFactory.make("nvinfer", "secondary4-nvinference-engine")
    if not sgie4:
        sys.stderr.write(" Unable to make sgie4 \n")

    print("Creating nvdsanalytics \n ")
    nvanalytics = Gst.ElementFactory.make("nvdsanalytics", "analytics")
    if not nvanalytics:
        sys.stderr.write(" Unable to create nvanalytics \n")
    nvanalytics.set_property("config-file", "config_nvdsanalytics.txt")

    print("Creating tiler \n ")
    tiler=Gst.ElementFactory.make("nvmultistreamtiler", "nvtiler")
    if not tiler:
        sys.stderr.write(" Unable to create tiler \n")

    print("Creating nvvidconv \n ")
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    print("Creating nvosd \n ")
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")
    nvosd.set_property('process-mode',OSD_PROCESS_MODE)
    nvosd.set_property('display-text',OSD_DISPLAY_TEXT)
    # nvosd.set_property('display-text',OSD_DISPLAY_TEXT)

    if(is_aarch64()):
        print("Creating transform \n ")
        transform=Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
        if not transform:
            sys.stderr.write(" Unable to create transform \n")

    # print("Creating EGLSink \n")
    # sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    # if not sink:
    #     sys.stderr.write(" Unable to create egl sink \n")

    if is_live:
        print("Atleast one of the sources is live")
        streammux.set_property('live-source', 1)

    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', number_sources)
    streammux.set_property('batched-push-timeout', 4000000)

    # set properties of pgie and sgie
    pgie.set_property('config-file-path', "dsnvanalytics_pgie_config.txt")

    sgie1.set_property('config-file-path', "sgie1_config.txt")
    sgie2.set_property('config-file-path', "sgie2_config.txt")
    sgie3.set_property('config-file-path', "sgie3_config.txt")
    sgie4.set_property('config-file-path', "sgie4_config.txt")
    # sgie1.set_property('process-mode', 2)
    # sgie3.set_property('process-mode', 2)
    pgie_batch_size=pgie.get_property("batch-size")
    if(pgie_batch_size <= number_sources):
        print("WARNING: Overriding infer-config batch-size",pgie_batch_size," with number of sources ", number_sources," \n")
        pgie.set_property("batch-size",number_sources)
    tiler_rows=int(math.sqrt(number_sources))
    tiler_columns=int(math.ceil((1.0*number_sources)/tiler_rows))
    tiler.set_property("rows",tiler_rows)
    tiler.set_property("columns",tiler_columns)
    tiler.set_property("width", TILED_OUTPUT_WIDTH)
    tiler.set_property("height", TILED_OUTPUT_HEIGHT)
    # sink.set_property("qos",0)

    #Set properties of tracker
    config = configparser.ConfigParser()
    config.read('dsnvanalytics_tracker_config.txt')
    config.sections()

    # Newly coded-in pipeline add -- RJ
    nvvidconv_postosd = Gst.ElementFactory.make("nvvideoconvert", "convertor_postosd")
    if not nvvidconv_postosd:
        sys.stderr.write(" Unable to create nvvidconv_postosd \n")
    # Create a caps filter  --RJ
    caps = Gst.ElementFactory.make("capsfilter", "filter")
    caps.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420"))

    # Make the encoder  --RJ
    if codec == "H264":
        encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")
        encoder2 = Gst.ElementFactory.make("nvv4l2h264enc", "encoder2")
        print("Creating H264 Encoder")
    elif codec == "H265":
        # NOTE: encoder2 is only created in the H264 branch above, and
        # codecparse below is h264parse, so this branch would fail as written
        # (codec is hardcoded to H264 in parse_args).
        encoder = Gst.ElementFactory.make("nvv4l2h265enc", "encoder")
        print("Creating H265 Encoder")
    if not encoder:
        sys.stderr.write(" Unable to create encoder")
    encoder.set_property('bitrate', bitrate)
    encoder2.set_property('bitrate', bitrate)
    if is_aarch64():
        encoder.set_property('preset-level', 1)
        encoder.set_property('insert-sps-pps', 1)
        encoder.set_property('bufapi-version', 1)
        encoder2.set_property('preset-level', 1)
        encoder2.set_property('insert-sps-pps', 1)
        encoder2.set_property('bufapi-version', 1)

    # Make the payload-encode video into RTP packets -- RJ
    if codec == "H264":
        rtppay = Gst.ElementFactory.make("rtph264pay", "rtppay")
        print("Creating H264 rtppay")
    elif codec == "H265":
        rtppay = Gst.ElementFactory.make("rtph265pay", "rtppay")
        print("Creating H265 rtppay")
    if not rtppay:
        sys.stderr.write(" Unable to create rtppay")
    # Make the codec  -- RJ
    codecparse = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not codecparse:
        sys.stderr.write(" Unable to create h264parse")

    # flvmux
    flvmux = Gst.ElementFactory.make("flvmux", "flvmux")
    if not flvmux:
        sys.stderr.write(" Unable to create flvmux")

    # Make the tee -- RJ
    tee=Gst.ElementFactory.make("tee", "nvsink-tee")
    if not tee:
        sys.stderr.write(" Unable to create tee \n")

    # Make the UDP sink
    # updsink_port_num = 5400
    # sink = Gst.ElementFactory.make("udpsink", "udpsink")
    # if not sink:
    #     sys.stderr.write(" Unable to create udpsink")
    #
    # sink.set_property('host', '224.224.255.255')
    # sink.set_property('port', updsink_port_num)
    # sink.set_property('async', False)
    # sink.set_property('sync', 1)

    # Make the rtmp sink
    sink2 = Gst.ElementFactory.make("rtmpsink", "rtmpsink")
    if not sink2:
        sys.stderr.write(" Unable to create rtmpsink \n")
    sink2.set_property('location', 'rtmp://localhost:1935/hls/url')

    # Make the eglsink
    sink1 = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    if not sink1:
        sys.stderr.write(" Unable to create egl sink \n")
    sink1.set_property('sync', 0)
    sink1.set_property("qos",0)

    for key in config['tracker']:
        if key == 'tracker-width' :
            tracker_width = config.getint('tracker', key)
            tracker.set_property('tracker-width', tracker_width)
        if key == 'tracker-height' :
            tracker_height = config.getint('tracker', key)
            tracker.set_property('tracker-height', tracker_height)
        if key == 'gpu-id' :
            tracker_gpu_id = config.getint('tracker', key)
            tracker.set_property('gpu_id', tracker_gpu_id)
        if key == 'll-lib-file' :
            tracker_ll_lib_file = config.get('tracker', key)
            tracker.set_property('ll-lib-file', tracker_ll_lib_file)
        if key == 'll-config-file' :
            tracker_ll_config_file = config.get('tracker', key)
            tracker.set_property('ll-config-file', tracker_ll_config_file)
        if key == 'enable-batch-process' :
            tracker_enable_batch_process = config.getint('tracker', key)
            tracker.set_property('enable_batch_process', tracker_enable_batch_process)
        if key == 'enable-past-frame' :
            tracker_enable_past_frame = config.getint('tracker', key)
            tracker.set_property('enable_past_frame', tracker_enable_past_frame)

    print("Adding elements to Pipeline \n")
    pipeline.add(pgie)
    pipeline.add(tracker)
    pipeline.add(sgie1)
    pipeline.add(sgie2)
    pipeline.add(sgie3)
    pipeline.add(sgie4)
    pipeline.add(nvanalytics)
    pipeline.add(tiler)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(nvvidconv_postosd)
    pipeline.add(caps)
    pipeline.add(encoder)
    pipeline.add(tee)
    # pipeline.add(sink)
    pipeline.add(flvmux)
    pipeline.add(codecparse)
    pipeline.add(sink2)
    pipeline.add(transform)
    pipeline.add(sink1)

    # We link elements in the following order:
    # sourcebin -> streammux -> pgie -> sgie4 (car color) -> sgie1 (plate) ->
    # tracker -> sgie3 -> nvanalytics -> tiler -> nvvideoconvert -> nvosd -> tee -> sinks
    
    streammux.link(queue1)
    queue1.link(pgie)
    pgie.link(queue2)
    # queue2.link(nvanalytics)
    queue2.link(sgie4)      # sgie4: car color
    # nvanalytics.link(tracker)
    # tracker.link(sgie4)
    # queue2.link(queue3)
    #   link sgie2 sgie1 etc
    # tracker.link(queue3)
    # queue2.link(tracker)
    # queue2.link(sgie3)
    # sgie3.link(queue3)
    # queue3.link(nvanalytics)
    # nvanalytics.link(queue4)
    # queue4.link(sgie4)
    # tracker.link(sgie1)
    sgie4.link(sgie1)   # sgie1 plate
    sgie1.link(queue5)
    queue5.link(tracker)
    tracker.link(sgie3)
    sgie3.link(nvanalytics)

    # queue5.link(sgie3)  # sgie3 number
    # sgie3.link(tracker)
    # tracker.link(nvanalytics)
    nvanalytics.link(queue6)
    queue6.link(tiler)
    tiler.link(queue7)
    queue7.link(nvvidconv)
    nvvidconv.link(queue8)
    queue8.link(nvosd)
    # queue6.link(tee)

     # Start linking tee for RTSP -- RJ
    # nvosd.link(tee)
    # queue7.link(nvvidconv_postosd)
    # nvvidconv_postosd.link(caps)
    # caps.link(encoder)
    # encoder.link(rtppay)
    # rtppay.link(sink)

    # Start linking tee for RTMP -- RJ
    nvosd.link(tee)
    queue9.link(nvvidconv_postosd)
    nvvidconv_postosd.link(caps)
    caps.link(encoder)
    encoder.link(codecparse)
    codecparse.link(flvmux)
    flvmux.link(sink2)

     # Start linking tee for egl -- RJ
    queue10.link(transform)
    transform.link(sink1)


    # Manually link the tee, which has request pads

    tee_rtmp_pad = tee.get_request_pad("src_%u")
    print("rtmp tee pad branch obtained")

    queue9_pad = queue9.get_static_pad("sink")

    tee_egl_pad = tee.get_request_pad("src_%u")
    print("egl tee pad branch obtained")

    queue10_pad = queue10.get_static_pad("sink")

    tee_rtmp_pad.link(queue9_pad)
    tee_egl_pad.link(queue10_pad)

    # tee_render_pad=tee.get_request_pad("src_%u")
   
    # sink_pad=queue7.get_static_pad("sink")
    # tee_render_pad.link(sink_pad)

    # if is_aarch64():
    #     # nvosd.link(queue7)
    #     queue7.link(nvvidconv_postosd)
    #     nvvidconv_postosd.link(caps)
    #     caps.link(encoder)
    #     encoder.link(rtppay)
    #     rtppay.link(sink)

    # else:
    #     nvosd.link(queue7)
    #     queue7.link(nvvidconv_postosd)
    #     nvvidconv_postosd.link(caps)
    #     caps.link(encoder)
    #     encoder.link(rtppay)
    #     rtppay.link(sink)




    # create an event loop and feed gstreamer bus mesages to it
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)

    # Start streaming
    # rtsp_port_num = 8554
    #
    # server = GstRtspServer.RTSPServer.new()
    # server.props.service = "%d" % rtsp_port_num
    # server.attach(None)
    #
    # factory = GstRtspServer.RTSPMediaFactory.new()
    # factory.set_launch(
    #     "( udpsrc name=pay0 port=%d buffer-size=524288 caps=\"application/x-rtp, media=video, clock-rate=90000, encoding-name=(string)%s, payload=96 \" )" % (
    #     updsink_port_num, codec))
    # factory.set_shared(True)
    # server.get_mount_points().add_factory("/ds-test", factory)

    # print("\n *** DeepStream: Launched RTSP Streaming at rtsp://localhost:%d/ds-test ***\n\n" % rtsp_port_num)
    print("\n *** DeepStream: Launched RTMP Streaming at rtmp://localhost:1935/live/file")

    nvanalytics_src_pad=nvanalytics.get_static_pad("src")
    if not nvanalytics_src_pad:
        sys.stderr.write(" Unable to get src pad \n")
    else:
        nvanalytics_src_pad.add_probe(Gst.PadProbeType.BUFFER, nvanalytics_src_pad_buffer_probe, 0)

    # # List the sources
    # print("Now playing...")
    # for i, source in enumerate(args):
    #     if (i != 0):
    #         print(i, ": ", source)

    print("Starting pipeline \n")
    # start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    print("Exiting app\n")
    pipeline.set_state(Gst.State.NULL)

def parse_args():
    # parser = argparse.ArgumentParser(description='RTSP Output Sample Application Help ')
    # # parser.add_argument("-i", "--input", help="Path to input H264 elementry stream", required=True)
    # parser.add_argument("-c", "--codec", default="H264",
    #               help="RTSP Streaming Codec H264/H265 , default=H264", choices=['H264','H265'])
    # parser.add_argument("-b", "--bitrate", default=4000000,
    #               help="Set the encoding bitrate ", type=int)
    # Check input arguments
    # if len(sys.argv)==1:
    #     parser.print_help(sys.stderr)
    #     sys.exit(1)
    # args = parser.parse_args()
    global codec
    global bitrate
    # global stream_path
    codec = "H264"
    bitrate = 2000000
    # stream_path = args.input
    return 0

if __name__ == '__main__':
    parse_args()
    linecrossing_status = {"number": 0}
    sys.exit(main(sys.argv))

A car and its plate are different objects, and each object gets its own track ID, so what you see in the picture is correct.

There is no relationship between the track IDs of PGIE and SGIE objects.

There is no sub-object data structure in DeepStream, so an SGIE detector outputs ordinary objects, the same as the PGIE.
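
For example (a minimal sketch; it assumes the gie-unique-id values from the configs above, 1 for the car PGIE and 2 for the plate SGIE), both kinds of objects appear side by side in the same per-frame list and can only be told apart by unique_component_id:

import pyds

def print_objects(frame_meta):
    # cars (PGIE) and plates (SGIE detector) are both plain NvDsObjectMeta
    # entries in frame_meta.obj_meta_list
    l_obj = frame_meta.obj_meta_list
    while l_obj is not None:
        obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
        kind = "car" if obj_meta.unique_component_id == 1 else "plate"
        print("{}: object_id={}".format(kind, obj_meta.object_id))
        try:
            l_obj = l_obj.next
        except StopIteration:
            break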

If you want to connect the results of the SGIE (detector) and the PGIE, the suggestion is to use "output-tensor-meta" with your SGIE (detector). That way the object parsing is under your control: you can generate your own user metadata to connect the PGIE object with the SGIE object, in addition to the object meta.
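
A minimal sketch of that approach (my assumptions: output-tensor-meta=1 is added to the SGIE1 config, the probe is attached to sgie1's src pad, and the raw-layer parsing is left as a stub; the pattern follows the deepstream-infer-tensor-meta / deepstream-ssd-parser samples):

import pyds
from gi.repository import Gst

def sgie1_src_pad_buffer_probe(pad, info, u_data):
    # With output-tensor-meta=1 on sgie1, the raw plate-detector output is
    # attached to the *car* object it ran on, so the car/plate association
    # never has to be reconstructed from track IDs.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)  # the car object
            l_user = obj_meta.obj_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                    tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                    if tensor_meta.unique_id == 2:  # gie-unique-id of sgie1
                        # parse output_cov/Sigmoid and output_bbox/BiasAdd here;
                        # the plate box is already tied to this car's object_id
                        pass
                try:
                    l_user = l_user.next
                except StopIteration:
                    break
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

If you also let nvinfer skip its own post-processing for sgie1 (if I remember correctly, network-type=100 does that), no separate plate objects are created at all, so the tracker never assigns them their own IDs.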