classifier_meta_list is None in deepstream_test2.py

I am using the DeepStream 6.1 samples Docker image to run deepstream_test2.py from the GitHub repo (GitHub - NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications). The sample works fine as given in the repo, but when I trained my own custom model and used it the same way, classifier_meta_list is None.
The only change in the Python file is that instead of 3 SGIEs I use only 1 SGIE, and the result is None all the time.

The PGIE config file remains the same as the one from GitHub.

SGIE config file:

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
#   custom-lib-path,
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
model-engine-file=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2/resnetmodel/model1.engine
tlt-encoded-model=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2/resnetmodel/model1.etlt
tlt-model-key=password
uff-input-blob-name=input_1
#uff-input-dims=3;224;224;1
int8-calib-file=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2/resnetmodel/caalib.bin
network-input-order=1
#infer-dims=3;224;224
batch-size=16
network-mode=2
num-detected-classes=12
input-object-min-width=64
input-object-min-height=64
model-color-format=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
output-blob-names=predictions/Softmax
classifier-async-mode=0
classifier-threshold=0.01
process-mode=2
#scaling-filter=0
#scaling-compute-hw=0

deepstream_test2.py

#!/usr/bin/env python3

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

import sys
sys.path.append('../')
import platform
import configparser

import os
no_display = True



print(os.getcwd())

import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call

import pyds

PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3
past_tracking_meta=[0]

# def osd_sink_pad_buffer_probe(pad,info,u_data):
#     frame_number=0
#     #Initializing object counter with 0.
#     obj_counter = {
#         PGIE_CLASS_ID_VEHICLE:0,
#         PGIE_CLASS_ID_PERSON:0,
#         PGIE_CLASS_ID_BICYCLE:0,
#         PGIE_CLASS_ID_ROADSIGN:0
#     }
#     num_rects=0
#     gst_buffer = info.get_buffer()
#     if not gst_buffer:
#         print("Unable to get GstBuffer ")
#         return

#     # Retrieve batch metadata from the gst_buffer
#     # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
#     # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
#     batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
#     l_frame = batch_meta.frame_meta_list

#     while l_frame is not None:
#         try:
#             # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
#             # The casting is done by pyds.NvDsFrameMeta.cast()
#             # The casting also keeps ownership of the underlying memory
#             # in the C code, so the Python garbage collector will leave
#             # it alone.
#             frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
#         except StopIteration:
#             break

#         frame_number=frame_meta.frame_num
#         num_rects = frame_meta.num_obj_meta
#         l_obj=frame_meta.obj_meta_list
#         while l_obj is not None:
#             try:
#                 # Casting l_obj.data to pyds.NvDsObjectMeta
#                 obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
#             except StopIteration:
#                 break
#             obj_counter[obj_meta.class_id] += 1
#             try: 
#                 l_obj=l_obj.next
#             except StopIteration:
#                 break

#         # Acquiring a display meta object. The memory ownership remains in
#         # the C code so downstream plugins can still access it. Otherwise
#         # the garbage collector will claim it when this probe function exits.
#         display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)
#         display_meta.num_labels = 1
#         py_nvosd_text_params = display_meta.text_params[0]
#         # Setting display text to be shown on screen
#         # Note that the pyds module allocates a buffer for the string, and the
#         # memory will not be claimed by the garbage collector.
#         # Reading the display_text field here will return the C address of the
#         # allocated string. Use pyds.get_string() to get the string content.
#         py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Person_count={} Vehicle count={}".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_PERSON], obj_counter[PGIE_CLASS_ID_VEHICLE])

#         # Now set the offsets where the string should appear
#         py_nvosd_text_params.x_offset = 10
#         py_nvosd_text_params.y_offset = 12

#         # Font , font-color and font-size
#         py_nvosd_text_params.font_params.font_name = "Serif"
#         py_nvosd_text_params.font_params.font_size = 10
#         # set(red, green, blue, alpha); set to White
#         py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

#         # Text background color
#         py_nvosd_text_params.set_bg_clr = 1
#         # set(red, green, blue, alpha); set to Black
#         py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
#         # Using pyds.get_string() to get display_text as string
#         print(pyds.get_string(py_nvosd_text_params.display_text))
#         pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
#         try:
#             l_frame=l_frame.next
#         except StopIteration:
#             break

#     print('the past tracking', past_tracking_meta[0])

#     #past tracking metadata
#     if(past_tracking_meta[0]==1):
#         l_user=batch_meta.batch_user_meta_list
#         print('The l user', l_user)
#         while l_user is not None:
#             try:
#                 # Note that l_user.data needs a cast to pyds.NvDsUserMeta
#                 # The casting is done by pyds.NvDsUserMeta.cast()
#                 # The casting also keeps ownership of the underlying memory
#                 # in the C code, so the Python garbage collector will leave
#                 # it alone
#                 user_meta=pyds.NvDsUserMeta.cast(l_user.data)
#             except StopIteration:
#                 break
#             if(user_meta and user_meta.base_meta.meta_type==pyds.NvDsMetaType.NVDS_TRACKER_PAST_FRAME_META):
#                 try:
#                     # Note that user_meta.user_meta_data needs a cast to pyds.NvDsPastFrameObjBatch
#                     # The casting is done by pyds.NvDsPastFrameObjBatch.cast()
#                     # The casting also keeps ownership of the underlying memory
#                     # in the C code, so the Python garbage collector will leave
#                     # it alone
#                     pPastFrameObjBatch = pyds.NvDsPastFrameObjBatch.cast(user_meta.user_meta_data)
#                 except StopIteration:
#                     break
#                 for trackobj in pyds.NvDsPastFrameObjBatch.list(pPastFrameObjBatch):
#                     print("streamId=",trackobj.streamID)
#                     print("surfaceStreamID=",trackobj.surfaceStreamID)
#                     for pastframeobj in pyds.NvDsPastFrameObjStream.list(trackobj):
#                         print("numobj=",pastframeobj.numObj)
#                         print("uniqueId=",pastframeobj.uniqueId)
#                         print("classId=",pastframeobj.classId)
#                         print("objLabel=",pastframeobj.objLabel)
#                         for objlist in pyds.NvDsPastFrameObjList.list(pastframeobj):
#                             print('frameNum:', objlist.frameNum)
#                             print('tBbox.left:', objlist.tBbox.left)
#                             print('tBbox.width:', objlist.tBbox.width)
#                             print('tBbox.top:', objlist.tBbox.top)
#                             print('tBbox.right:', objlist.tBbox.height)
#                             print('confidence:', objlist.confidence)
#                             print('age:', objlist.age)
#             try:
#                 l_user=l_user.next
#             except StopIteration:
#                 break
#     return Gst.PadProbeReturn.OK	


def osd_sink_pad_buffer_probe(pad,info,u_data):
    frame_number=0
    #Initializing object counter with 0.
    obj_counter = {
        PGIE_CLASS_ID_VEHICLE:0,
        PGIE_CLASS_ID_PERSON:0,
        PGIE_CLASS_ID_BICYCLE:0,
        PGIE_CLASS_ID_ROADSIGN:0
    }
    num_rects=0

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.glist_get_nvds_frame_meta()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            #frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number=frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj=frame_meta.obj_meta_list

        print(l_obj)

        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                #obj_meta=pyds.glist_get_nvds_object_meta(l_obj.data)
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
                if (obj_meta.class_id == PGIE_CLASS_ID_VEHICLE):
                    cls_obj = obj_meta.classifier_meta_list
                    print('cls obj is',cls_obj)
                    while cls_obj is not None:
                        print('The cls is not none')
                        try:
                            cls_meta = pyds.NvDsClassifierMeta.cast(cls_obj.data)
                            print('the labels info', cls_meta.label_info_list)
                            print('component id', cls_meta.unique_component_id)
                            if cls_meta.unique_component_id == 2:
                                cls_meta_lbl = cls_meta.label_info_list
                                while cls_meta_lbl is not None:
                                    try:
                                        cls_meta_lbl_info = pyds.NvDsLabelInfo.cast(cls_meta_lbl.data)
                                        result_str = str(cls_meta_lbl_info.result_label)
                                        print("result_decode", result_str)
                                        print("result_strip:", result_str.split('\x00'))
                                        print("result_one:", result_str.split('\x00')[0])
                                        print("-----------------------------------------")
                                        print("result_label:", cls_meta_lbl_info.result_label)
                                        # advance to the next label entry; otherwise this
                                        # inner loop never terminates
                                        cls_meta_lbl = cls_meta_lbl.next
                                    except StopIteration:
                                        break
                            # advance to the next classifier meta entry; otherwise this
                            # loop never terminates
                            cls_obj = cls_obj.next
                        except StopIteration:
                            break
            except StopIteration:
                break

            if obj_meta.class_id in obj_counter:
                pass

            else:
                obj_counter[obj_meta.class_id] = 0

            obj_counter[obj_meta.class_id] += 1
            obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.0)
            try: 
                l_obj=l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font , font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)

        try:
            l_frame=l_frame.next
        except StopIteration:
            break
			
    return Gst.PadProbeReturn.OK	

def main(args):
    # Check input arguments
    if(len(args)<2):
        sys.stderr.write("usage: %s <h264_elementary_stream> [0/1]\n" % args[0])
        sys.exit(1)

    # Standard GStreamer initialization
    if(len(args)==3):
        past_tracking_meta[0]=int(args[2])
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    # Since the data format in the input file is elementary h264 stream,
    # we need a h264parser
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    if not tracker:
        sys.stderr.write(" Unable to create tracker \n")

    sgie1 = Gst.ElementFactory.make("nvinfer", "secondary1-nvinference-engine")
    if not sgie1:
        sys.stderr.write(" Unable to make sgie1 \n")


    sgie2 = Gst.ElementFactory.make("nvinfer", "secondary2-nvinference-engine")
    if not sgie2:
        sys.stderr.write(" Unable to make sgie2 \n")

    sgie3 = Gst.ElementFactory.make("nvinfer", "secondary3-nvinference-engine")
    if not sgie3:
        sys.stderr.write(" Unable to make sgie3 \n")

    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    if no_display:
        print("Creating Fakesink \n")
        sink = Gst.ElementFactory.make("fakesink", "fakesink")
        sink.set_property('enable-last-sample', 0)
        sink.set_property('sync', 0)
    else:
        if(is_aarch64()):
            print("Creating transform \n ")
            transform=Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
            if not transform:
                sys.stderr.write(" Unable to create transform \n")
        print("Creating EGLSink \n")
        sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")    
    
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    print("Playing file %s " %args[1])
    source.set_property('location', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)

    #Set properties of pgie and sgie
    pgie.set_property('config-file-path', "dstest2_pgie_config.txt")
    sgie1.set_property('config-file-path', "dstest2_sgie1_config.txt")
    #sgie2.set_property('config-file-path', "dstest2_sgie2_config.txt")
    #sgie3.set_property('config-file-path', "dstest2_sgie3_config.txt")

    #Set properties of tracker
    config = configparser.ConfigParser()
    config.read('dstest2_tracker_config.txt')
    config.sections()

    print('config', config.keys())

    for key in config['tracker']:
        if key == 'tracker-width' :
            tracker_width = config.getint('tracker', key)
            tracker.set_property('tracker-width', tracker_width)
        if key == 'tracker-height' :
            tracker_height = config.getint('tracker', key)
            tracker.set_property('tracker-height', tracker_height)
        if key == 'gpu-id' :
            tracker_gpu_id = config.getint('tracker', key)
            tracker.set_property('gpu_id', tracker_gpu_id)
        if key == 'll-lib-file' :
            tracker_ll_lib_file = config.get('tracker', key)
            tracker.set_property('ll-lib-file', tracker_ll_lib_file)
        if key == 'll-config-file' :
            tracker_ll_config_file = config.get('tracker', key)
            tracker.set_property('ll-config-file', tracker_ll_config_file)
        if key == 'enable-batch-process' :
            tracker_enable_batch_process = config.getint('tracker', key)
            tracker.set_property('enable_batch_process', tracker_enable_batch_process)
        if key == 'enable-past-frame' :
            tracker_enable_past_frame = config.getint('tracker', key)
            tracker.set_property('enable_past_frame', tracker_enable_past_frame)

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(tracker)
    pipeline.add(sgie1)
    #pipeline.add(sgie2)
    #pipeline.add(sgie3)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)
    if is_aarch64():
        pipeline.add(transform)

    # we link the elements together
    # file-source -> h264-parser -> nvh264-decoder ->
    # nvinfer -> nvvidconv -> nvosd -> video-renderer
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(tracker)
    tracker.link(sgie1)
    sgie1.link(nvvidconv)
    #sgie2.link(sgie3)
    #sgie3.link(nvvidconv)
    nvvidconv.link(nvosd)
    if is_aarch64():
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)


    # create an event loop and feed GStreamer bus messages to it
    loop = GLib.MainLoop()

    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)

    # Lets add probe to get informed of the meta data generated, we add probe to
    # the sink pad of the osd element, since by that time, the buffer would have
    # had got all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)


    print("Starting pipeline \n")
    
    # start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)
    try:
      loop.run()
    except:
      pass

    # cleanup
    pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    sys.exit(main(sys.argv))

Have you checked the output of PGIE? Can it produce the expected detections?

Yes, it does. @yingliu

If I use one of the SGIE configs from the test2 folder, the entire code works fine, so I am not sure whether I have written the SGIE config correctly for the custom model. @yingliu

@yingliu, any update regarding the issue?

@kuppasaisriteja what is your model's type, detector or classifier? If your model is a classification model, you should set network-type=1 in your config file.
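
For reference, the classifier-specific part of a secondary nvinfer config could look like the sketch below (a minimal sketch, not a drop-in file; the engine path, label file, and threshold are placeholders for your model):

[property]
network-type=1                  # 0=Detector, 1=Classifier
process-mode=2                  # 2 = secondary mode, operates on detected objects
operate-on-gie-id=1             # must match the PGIE's gie-unique-id
gie-unique-id=2
model-engine-file=<your-classifier-engine>
labelfile-path=<your-labels-file>
output-blob-names=predictions/Softmax
classifier-threshold=0.2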

I have changed the config to the following, yet I get the same error.
@yuweiw

gpu-id=0
net-scale-factor=1
network-type = 1
model-engine-file=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/deploy/model.engine
#tlt-encoded-model=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/deploy/model.etlt
tlt-model-key=password
uff-input-blob-name=input_1
#uff-input-dims=3;224;224;1
int8-calib-file=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/deploy/calib.bin
network-input-order=1
#infer-dims=3;224;224
batch-size=16
network-mode=2
num-detected-classes=2
input-object-min-width=10
input-object-min-height=10
model-color-format=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
output-blob-names=predictions/Softmax
classifier-async-mode=0
classifier-threshold=0.01
process-mode=2
#scaling-filter=0
#scaling-compute-hw=0

Could you show the full log? Generally speaking, the log shows the input and output parameters of the model.
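
For example, you could capture it with something like GST_DEBUG=3 python3 deepstream_test_2.py <h264_stream> 1 2>&1 | tee run.log (GST_DEBUG only raises GStreamer's log verbosity; run.log is just an arbitrary file name).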

Log output from running the Python file @yuweiw @yingliu

python3 deepstream_test_2.py /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264 1
/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new
Creating Pipeline 
 
Creating Source 
 
Creating H264Parser 

Creating Decoder 

Creating Fakesink 

Playing file /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264 
config KeysView(<configparser.ConfigParser object at 0x7f9e31f66400>)
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

0:00:03.185126884  2128      0x2dff210 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/deploy/model.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 2x1x1           

0:00:03.214340723  2128      0x2dff210 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/deploy/model.engine
0:00:03.215812312  2128      0x2dff210 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary1-nvinference-engine> [UID 2]: Load new model:dstest2_sgie1_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
Deserialize yoloLayer plugin: yolo
0:00:04.013524800  2128      0x2dff210 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/model_b1_gpu0_fp32.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT data            3x640x640       
1   OUTPUT kFLOAT num_detections  1               
2   OUTPUT kFLOAT detection_boxes 25200x4         
3   OUTPUT kFLOAT detection_scores 25200           
4   OUTPUT kFLOAT detection_classes 25200           

0:00:04.040142368  2128      0x2dff210 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/model_b1_gpu0_fp32.engine
0:00:04.042595664  2128      0x2dff210 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:primary_pgie_config_yolo.txt sucessfully
^C[NvMultiObjectTracker] De-initialized
root@2310d21bada0:/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new# python3 deepstream_test_2.py /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264 1
/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new
Creating Pipeline 
 
Creating Source 
 
Creating H264Parser 

Creating Decoder 

Creating Fakesink 

Playing file /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264 
config KeysView(<configparser.ConfigParser object at 0x7fd75cd12400>)
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

0:00:03.107142753  2148      0x22cc010 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/deploy/model.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 2x1x1           

0:00:03.137074288  2148      0x22cc010 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/deploy/model.engine
0:00:03.138806694  2148      0x22cc010 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary1-nvinference-engine> [UID 2]: Load new model:dstest2_sgie1_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
Deserialize yoloLayer plugin: yolo
0:00:03.931824297  2148      0x22cc010 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/model_b1_gpu0_fp32.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT data            3x640x640       
1   OUTPUT kFLOAT num_detections  1               
2   OUTPUT kFLOAT detection_boxes 25200x4         
3   OUTPUT kFLOAT detection_scores 25200           
4   OUTPUT kFLOAT detection_classes 25200           

0:00:03.959648902  2148      0x22cc010 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/model_b1_gpu0_fp32.engine
0:00:03.961930984  2148      0x22cc010 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:primary_pgie_config_yolo.txt sucessfully
None
Frame Number=0 Number of Objects=0 Vehicle_count=0 Person_count=0
None
Frame Number=1 Number of Objects=0 Vehicle_count=0 Person_count=0
None
Frame Number=2 Number of Objects=0 Vehicle_count=0 Person_count=0
<pyds.GList object at 0x7fd75b469070>
<pyds.NvDsComp_BboxInfo object at 0x7fd75bbc59f0>
cls obj is None
<pyds.NvDsComp_BboxInfo object at 0x7fd75bbf8570>
cls obj is None
<pyds.NvDsComp_BboxInfo object at 0x7fd75bbcc2f0>
cls obj is None
<pyds.NvDsComp_BboxInfo object at 0x7fd75bc33e70>
cls obj is None
Frame Number=3 Number of Objects=4 Vehicle_count=4 Person_count=0
<pyds.GList object at 0x7fd75b45ac30>
<pyds.NvDsComp_BboxInfo object at 0x7fd75bbb8f70>
cls obj is None
<pyds.NvDsComp_BboxInfo object at 0x7fd75b483370>
cls obj is None
<pyds.NvDsComp_BboxInfo object at 0x7fd75b469070>
cls obj is None
<pyds.NvDsComp_BboxInfo object at 0x7fd75bbc59f0>
cls obj is None

Does this topic describe the same issue as Deserialized backend context model.etlt_b16_gpu0_fp16.engine failed to match config params - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums ? We may focus on one of them if both topics share the same issue.

Okay, I generated the engine file using the nvidia tao export command, and that generated engine shows this error. When I give the .etlt file and cal_trt.bin file instead, generating a new engine file fails with the error from the other post. Either way, I am good if one of them gets resolved.

In this topic the PGIE is running well (as you mentioned, the PGIE with the model trained by TAO has correct output); the issue here is that the classifier has no output.
In topic-2 it says the detector engine cannot be created ("backend can not support dims:224x224x3"), so it seems the PGIE with your model cannot start correctly.
So topic-2 is solved, and thus you hit this issue (empty classifier_meta_list) in this topic? Please confirm, thanks.

I am not sure the problem is with the PGIE; I have been assuming it is with the SGIE all this time. I have just uncommented lines in the SGIE config and got 2 different errors. @yingliu @yuweiw

Can you summarize the latest status, i.e. your latest config and the latest errors/output?
Let's focus on this topic.

Code:

The pipeline is pgie -> tracker -> sgie.

For the PGIE I am using one from the DeepStream apps, which works fine. I am not getting output from the SGIE, which is a classifier that distinguishes 2 classes, car/truck. @yingliu @yuweiw

SGIE config file:

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
#   custom-lib-path,
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
net-scale-factor=1
network-type = 1
model-engine-file=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/deploy/model.engine
#tlt-encoded-model=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/deploy/model.etlt
tlt-model-key=password
uff-input-blob-name=input_1
#uff-input-dims=3;224;224;1
int8-calib-file=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/deploy/calib.bin
network-input-order=1
#infer-dims=3;224;224
batch-size=24
network-mode=2
num-detected-classes=2
input-object-min-width=10
input-object-min-height=10
model-color-format=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
output-blob-names=predictions/Softmax
classifier-async-mode=0
classifier-threshold=0.01
process-mode=2
#scaling-filter=0
#scaling-compute-hw=0

Python Code:

#!/usr/bin/env python3

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

import sys
sys.path.append('../')
import platform
import configparser

import os
no_display = True



print(os.getcwd())

import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call

import pyds

PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3
past_tracking_meta=[0]



def osd_sink_pad_buffer_probe(pad,info,u_data):
    frame_number=0
    #Initializing object counter with 0.
    obj_counter = {
        PGIE_CLASS_ID_VEHICLE:0,
        PGIE_CLASS_ID_PERSON:0,
        PGIE_CLASS_ID_BICYCLE:0,
        PGIE_CLASS_ID_ROADSIGN:0
    }
    num_rects=0

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.glist_get_nvds_frame_meta()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            #frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number=frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj=frame_meta.obj_meta_list

        print(l_obj)

        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                #obj_meta=pyds.glist_get_nvds_object_meta(l_obj.data)
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
                print(obj_meta.detector_bbox_info)


                if (obj_meta.class_id == PGIE_CLASS_ID_VEHICLE):

                    cls_obj = obj_meta.classifier_meta_list
                    print('cls obj is',cls_obj)

                    while cls_obj is not None:
                        print('The cls is not none')
                        try:
                            cls_meta = pyds.NvDsClassifierMeta.cast(cls_obj.data)
                            print('the labels info', cls_meta.label_info_list)
                            print('component id', cls_meta.unique_component_id)
                            if cls_meta.unique_component_id == 2:
                                cls_meta_lbl = cls_meta.label_info_list
                                while cls_meta_lbl is not None:
                                    try:
                                        cls_meta_lbl_info = pyds.NvDsLabelInfo.cast(cls_meta_lbl.data)
                                        result_str = str(cls_meta_lbl_info.result_label)
                                        print("result_decode", result_str)
                                        print("result_strip:", result_str.split('\x00'))
                                        print("result_one:", result_str.split('\x00')[0])
                                        print("-----------------------------------------")
                                        print("result_label:", cls_meta_lbl_info.result_label)
                                        # advance to the next label entry; otherwise this
                                        # inner loop never terminates
                                        cls_meta_lbl = cls_meta_lbl.next
                                    except StopIteration:
                                        break
                            # advance to the next classifier meta entry; otherwise this
                            # loop never terminates
                            cls_obj = cls_obj.next
                        except StopIteration:
                            break
            except StopIteration:
                break

            if obj_meta.class_id in obj_counter:
                pass

            else:
                obj_counter[obj_meta.class_id] = 0

            obj_counter[obj_meta.class_id] += 1
            obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.0)
            try: 
                l_obj=l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font , font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)

        try:
            l_frame=l_frame.next
        except StopIteration:
            break
			
    return Gst.PadProbeReturn.OK	

def main(args):
    # Check input arguments
    if(len(args)<2):
        sys.stderr.write("usage: %s <h264_elementary_stream> [0/1]\n" % args[0])
        sys.exit(1)

    # Standard GStreamer initialization
    if(len(args)==3):
        past_tracking_meta[0]=int(args[2])
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    # Since the data format in the input file is elementary h264 stream,
    # we need a h264parser
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    if not tracker:
        sys.stderr.write(" Unable to create tracker \n")

    sgie1 = Gst.ElementFactory.make("nvinfer", "secondary1-nvinference-engine")
    if not sgie1:
        sys.stderr.write(" Unable to make sgie1 \n")


    # sgie2 = Gst.ElementFactory.make("nvinfer", "secondary2-nvinference-engine")
    # if not sgie2:
    #     sys.stderr.write(" Unable to make sgie2 \n")

    # sgie3 = Gst.ElementFactory.make("nvinfer", "secondary3-nvinference-engine")
    # if not sgie3:
    #     sys.stderr.write(" Unable to make sgie3 \n")

    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    if no_display:
        print("Creating Fakesink \n")
        sink = Gst.ElementFactory.make("fakesink", "fakesink")
        sink.set_property('enable-last-sample', 0)
        sink.set_property('sync', 0)
    else:
        if(is_aarch64()):
            print("Creating transform \n ")
            transform=Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
            if not transform:
                sys.stderr.write(" Unable to create transform \n")
        print("Creating EGLSink \n")
        sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")    
    
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    print("Playing file %s " %args[1])
    source.set_property('location', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)

    #Set properties of pgie and sgie
    pgie.set_property('config-file-path', "primary_pgie_config_yolo.txt")
    sgie1.set_property('config-file-path', "dstest2_sgie1_config.txt")
    #sgie2.set_property('config-file-path', "dstest2_sgie2_config.txt")
    #sgie3.set_property('config-file-path', "dstest2_sgie3_config.txt")

    #Set properties of tracker
    config = configparser.ConfigParser()
    config.read('dstest2_tracker_config.txt')
    config.sections()

    print('config', config.keys())

    for key in config['tracker']:
        if key == 'tracker-width' :
            tracker_width = config.getint('tracker', key)
            tracker.set_property('tracker-width', tracker_width)
        if key == 'tracker-height' :
            tracker_height = config.getint('tracker', key)
            tracker.set_property('tracker-height', tracker_height)
        if key == 'gpu-id' :
            tracker_gpu_id = config.getint('tracker', key)
            tracker.set_property('gpu_id', tracker_gpu_id)
        if key == 'll-lib-file' :
            tracker_ll_lib_file = config.get('tracker', key)
            tracker.set_property('ll-lib-file', tracker_ll_lib_file)
        if key == 'll-config-file' :
            tracker_ll_config_file = config.get('tracker', key)
            tracker.set_property('ll-config-file', tracker_ll_config_file)
        if key == 'enable-batch-process' :
            tracker_enable_batch_process = config.getint('tracker', key)
            tracker.set_property('enable_batch_process', tracker_enable_batch_process)
        if key == 'enable-past-frame' :
            tracker_enable_past_frame = config.getint('tracker', key)
            tracker.set_property('enable_past_frame', tracker_enable_past_frame)

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(tracker)
    pipeline.add(sgie1)
    #pipeline.add(sgie2)
    #pipeline.add(sgie3)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)
    if is_aarch64():
        pipeline.add(transform)

    # we link the elements together
    # file-source -> h264-parser -> nvh264-decoder ->
    # nvinfer -> nvvidconv -> nvosd -> video-renderer
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(tracker)
    tracker.link(sgie1)
    sgie1.link(nvvidconv)
    #sgie2.link(sgie3)
    #sgie3.link(nvvidconv)
    nvvidconv.link(nvosd)
    if is_aarch64():
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)


    # create an event loop and feed GStreamer bus messages to it
    loop = GLib.MainLoop()

    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)

    # Lets add probe to get informed of the meta data generated, we add probe to
    # the sink pad of the osd element, since by that time, the buffer would have
    # had got all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)


    print("Starting pipeline \n")
    
    # start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)
    try:
      loop.run()
    except:
      pass

    # cleanup
    pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    sys.exit(main(sys.argv))

Config file used for training the ResNet:

model_config {
  # Model Architecture can be chosen from:
  # ['resnet', 'vgg', 'googlenet', 'alexnet']
  arch: "resnet"
  n_layers: 18
  use_batch_norm: True
  use_bias: False
  all_projections: False
  use_pooling: True
  retain_head: True
  resize_interpolation_method: BICUBIC
  # if you want to use the pretrained model,
  # image size should be "3,224,224"
  # otherwise, it can be "3, X, Y", where X,Y >= 16
  input_image_size: "3,224,224"
}
train_config {
  train_dataset_path: "car_truck_classification_dataset/train"
  val_dataset_path: "car_truck_classification_dataset/val"
#   pretrained_model_path: "/path/to/your/pretrained/model"
  # Only ['sgd', 'adam'] are supported for optimizer
  optimizer {
      sgd {
      lr: 0.01
      decay: 0.0
      momentum: 0.9
      nesterov: False
      }
  }
  batch_size_per_gpu: 24
  n_epochs: 200
  # Number of CPU cores for loading data
  n_workers: 16
  # regularizer
  reg_config {
      # regularizer type can be "L1", "L2" or "None".
      type: "L2"
      # if the type is not "None",
      # scope can be either "Conv2D" or "Dense" or both.
      scope: "Conv2D,Dense"
      # 0 < weight decay < 1
      weight_decay: 0.000015
  }
  # learning_rate
  lr_config {
      cosine {
      learning_rate: 0.04
      soft_start: 0.0
      }
  }
  enable_random_crop: True
  enable_center_crop: True
  enable_color_augmentation: True
  mixup_alpha: 0.2
  label_smoothing: 0.1
  preprocess_mode: "caffe"
  image_mean {
    key: 'b'
    value: 103.9
  }
  image_mean {
    key: 'g'
    value: 116.8
  }
  image_mean {
    key: 'r'
    value: 123.7
  }
}
eval_config {
  eval_dataset_path: "car_truck_classification_dataset/val"
  model_path: "weights/resnet_002_pruned.tlt"
  top_k: 3 
  batch_size: 256
  n_workers: 8
  enable_center_crop: True
}

Commands used to train the ResNet-18 model and convert it from .tlt to .etlt:


tao classification train -e path/resnet18_train.cfg \
-k password \
-r path/models


tao classification prune -m path/models/weights/resnet_002.tlt \
                         -o path/models/weights/resnet_002_pruned.tlt \
                         -eq union \
                         -pth 0.2 \
                         -k password



tao classification evaluate -e path/resnet18_inference.cfg \
                            -k password 




tao classification calibration_tensorfile -e path/resnet18_inference.cfg \
                                          -o path/deploy/calib.txt \
                                          -m 24



tao classification export -m path/models/weights/resnet_002_pruned.tlt \
                          -k password \
                          -o path/deploy/model.etlt \
                          --batch_size 24 \
                          --engine_file path/deploy/model.engine \
                          -e path/resnet18_inference.cfg \
                          --data_type fp16

Any update? @yingliu @yuweiw

1. You have to determine the type of your model, detector or classifier.
2. Could you get the output from our demo code without any change?
3. Also, you can refer to the link below for similar questions:

I have mentioned in the SGIE config that it is a classifier.
The detector runs fine, but when I run the custom SGIE it fails.
I have already gone through the issue you mentioned, but that one is about a detector cascaded with another detector, not a classifier. I also cross-checked by setting network-type=1 in the SGIE file.

@yuweiw @yingliu

Could you try setting the parameters below in your config file:

## 0=Detector, 1=Classifier, 2=Segmentation, 100=Other
network-type=100
# Enable tensor metadata output
output-tensor-meta=1
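
Note that with network-type=100, nvinfer skips its built-in postprocessing, so classifier_meta_list will stay None by design; instead, the raw output tensor is attached to each object as user meta, which lets you verify whether the SGIE produces any output at all. Below is a minimal sketch of how you could read it inside your object loop, right after obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data); the layer index 0 and the element count of 2 are assumptions based on your engine log (predictions/Softmax, 2x1x1):

import ctypes  # put this import at the top of the script

# inside the while l_obj loop, after casting obj_meta
l_user = obj_meta.obj_user_meta_list
while l_user is not None:
    try:
        user_meta = pyds.NvDsUserMeta.cast(l_user.data)
    except StopIteration:
        break
    if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
        tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
        layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)  # first (and only) output layer
        probs = ctypes.cast(pyds.get_ptr(layer.buffer), ctypes.POINTER(ctypes.c_float))
        print('sgie raw softmax:', [probs[i] for i in range(2)])
    try:
        l_user = l_user.next
    except StopIteration:
        break

If this prints nothing at all, the objects are most likely never reaching the SGIE (for example, filtered out by operate-on-class-ids or the input-object-min-* settings), rather than the output parsing failing.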

I am still getting the same error. @yingliu @yuweiw