classifier_meta_data is None for ONNX model as input

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.1
• TensorRT Version: 8.2
• NVIDIA GPU Driver Version (valid for GPU only): 510.73.05, CUDA 11.6

I have trained a custom PyTorch model on 17 classes:
colab link

I am using the deepstream-test2 Python app with only a detector and a classifier. The classifier is not working: as far as I can see, classifier_meta_data is always None.

My SGIE:

[property]
gpu-id=0
net-scale-factor=0.0174292
offsets=123.675;116.28;103.53
onnx-file=model/resnet18.onnx
labelfile-path=model/labels.txt


batch-size=1
model-color-format=0
process-mode=2

## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2


is-classifier=1
num-detected-classes=17
interval=0

gie-unique-id=2

model-engine-file=pose_estimation.onnx_b1_gpu0_fp16.engine

network-type=100
workspace-size=3000

# run the classifier on boxes coming from the PGIE (gie-unique-id=1)
operate-on-gie-id=1
# classify only detector class 0 (cars)
operate-on-class-ids=0

maintain-aspect-ratio=1
classifier-async-mode=1
classifier-threshold=0.01
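
For reference, gst-nvinfer normalizes each input pixel per channel as y = net-scale-factor * (x - offset). A minimal numpy sketch of what the two values above compute:

import numpy as np

net_scale_factor = 0.0174292
offsets = np.array([123.675, 116.28, 103.53], dtype=np.float32)  # per-channel means

def normalize(pixel):
    # pixel: raw 0-255 channel values in the model's color order
    return net_scale_factor * (np.asarray(pixel, dtype=np.float32) - offsets)

print(normalize([255, 255, 255]))  # -> approx [2.289, 2.418, 2.640]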

Python Script:

#!/usr/bin/env python3

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

import sys
sys.path.append('../')
import platform
import configparser
import os
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call

# location = os.getcwd() + "/src/ros2_deepstream/config_files/"

class_obj = 'Car;Bicycle;Person;Roadsign'.split(';')

# class_color = 'black;blue;brown;gold;green;grey;maroon;orange;red;silver;white;yellow'.split(';')

# class_make = 'acura;audi;bmw;chevrolet;chrysler;dodge;ford;gmc;honda;hyundai;infiniti;jeep;kia;lexus;mazda;mercedes;nissan;subaru;toyota;volkswagen'.split(';')

# class_type = 'coupe;largevehicle;sedan;suv;truck;van'.split(';')

class_type = 'Ambulance;Barge;Bicycle;Boat;Bus;Car;Cart;Caterpillar;Helicopter;Limousine;Motorcycle;Segway;Snowmobile;Tank;Taxi;Truck;Van'.split(';')

import pyds

PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3
past_tracking_meta=[0]

def osd_sink_pad_buffer_probe(pad,info,u_data):
    frame_number=0
    #Initializing object counter with 0.
    obj_counter = {
        PGIE_CLASS_ID_VEHICLE:0,
        PGIE_CLASS_ID_PERSON:0,
        PGIE_CLASS_ID_BICYCLE:0,
        PGIE_CLASS_ID_ROADSIGN:0
    }
    num_rects=0
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number=frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj=frame_meta.obj_meta_list

        while l_obj is not None:
            try:

                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
                l_classifier = obj_meta.classifier_meta_list
                # print(l_classifier)

                # If object is a car (class ID 0), perform attribute classification
                if obj_meta.class_id == 0 and l_classifier is not None:
                    # Creating and publishing message with output of classification inference
                    # msg2 = Classification2D()

                    while l_classifier is not None:
                        # result = ObjectHypothesis()
                        try:
                            # Cast via pyds.NvDsClassifierMeta.cast(), matching
                            # the cast-style bindings used for the other meta types
                            classifier_meta = pyds.NvDsClassifierMeta.cast(l_classifier.data)
                            
                        except StopIteration:
                            print('Could not parse MetaData: ')
                            break

                        classifier_id = classifier_meta.unique_component_id
                        l_label = classifier_meta.label_info_list
                        label_info = pyds.NvDsLabelInfo.cast(l_label.data)
                        classifier_class = label_info.result_class_id

                        # print("Classifier ID: ", classifier_id)
                        # print("Classifier Class: ", classifier_class)

                        # class_color and class_make are commented out above, so
                        # only the 17-class vehicle-type list is usable here; the
                        # custom SGIE in this setup runs with gie-unique-id=2.
                        if classifier_id == 2:
                            print('type:', class_type[classifier_class])



                        # if classifier_id == 2:
                        #     result.id = class_color[classifier_class]
                        # elif classifier_id == 3:
                        #     result.id = class_make[classifier_class]
                        # else:
                        #     result.id = class_type[classifier_class]

                        # result.score = label_info.result_prob                            
                        # msg2.results.append(result)
                        l_classifier = l_classifier.next
                
                        # print('the result is ', result)
                    # self.publisher_classification.publish(msg2)

            except StopIteration:
                break
            obj_counter[obj_meta.class_id] += 1
            try: 
                l_obj=l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font , font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame=l_frame.next
        except StopIteration:
            break
    #past tracking meta data
    if(past_tracking_meta[0]==1):
        l_user=batch_meta.batch_user_meta_list
        while l_user is not None:
            try:
                # Note that l_user.data needs a cast to pyds.NvDsUserMeta
                # The casting is done by pyds.NvDsUserMeta.cast()
                # The casting also keeps ownership of the underlying memory
                # in the C code, so the Python garbage collector will leave
                # it alone
                user_meta=pyds.NvDsUserMeta.cast(l_user.data)
            except StopIteration:
                break
            if(user_meta and user_meta.base_meta.meta_type==pyds.NvDsMetaType.NVDS_TRACKER_PAST_FRAME_META):
                try:
                    # Note that user_meta.user_meta_data needs a cast to pyds.NvDsPastFrameObjBatch
                    # The casting is done by pyds.NvDsPastFrameObjBatch.cast()
                    # The casting also keeps ownership of the underlying memory
                    # in the C code, so the Python garbage collector will leave
                    # it alone
                    pPastFrameObjBatch = pyds.NvDsPastFrameObjBatch.cast(user_meta.user_meta_data)
                except StopIteration:
                    break
                for trackobj in pyds.NvDsPastFrameObjBatch.list(pPastFrameObjBatch):
                    print("streamId=",trackobj.streamID)
                    print("surfaceStreamID=",trackobj.surfaceStreamID)
                    for pastframeobj in pyds.NvDsPastFrameObjStream.list(trackobj):
                        print("numobj=",pastframeobj.numObj)
                        print("uniqueId=",pastframeobj.uniqueId)
                        print("classId=",pastframeobj.classId)
                        print("objLabel=",pastframeobj.objLabel)
                        for objlist in pyds.NvDsPastFrameObjList.list(pastframeobj):
                            print('frameNum:', objlist.frameNum)
                            print('tBbox.left:', objlist.tBbox.left)
                            print('tBbox.width:', objlist.tBbox.width)
                            print('tBbox.top:', objlist.tBbox.top)
                            print('tBbox.height:', objlist.tBbox.height)
                            print('confidence:', objlist.confidence)
                            print('age:', objlist.age)
            try:
                l_user=l_user.next
            except StopIteration:
                break
    return Gst.PadProbeReturn.OK	

def main(args):
    # Check input arguments
    if(len(args)<2):
        sys.stderr.write("usage: %s <h264_elementary_stream> [0/1]\n" % args[0])
        sys.exit(1)

    # Standard GStreamer initialization
    if(len(args)==3):
        past_tracking_meta[0]=int(args[2])
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    # Since the data format in the input file is elementary h264 stream,
    # we need an h264parser
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    if not tracker:
        sys.stderr.write(" Unable to create tracker \n")

    sgie1 = Gst.ElementFactory.make("nvinfer", "secondary1-nvinference-engine")
    if not sgie1:
        sys.stderr.write(" Unable to make sgie1 \n")


    sgie2 = Gst.ElementFactory.make("nvinfer", "secondary2-nvinference-engine")
    if not sgie2:
        sys.stderr.write(" Unable to make sgie2 \n")

    sgie3 = Gst.ElementFactory.make("nvinfer", "secondary3-nvinference-engine")
    if not sgie3:
        sys.stderr.write(" Unable to make sgie3 \n")

    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    # Finally render the osd output
    if is_aarch64():
        transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")

    print("Creating EGLSink \n")
    sink = Gst.ElementFactory.make("fakesink", "fakesink")
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    print("Playing file %s " %args[1])
    source.set_property('location', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)

    #Set properties of pgie and sgie
    pgie.set_property('config-file-path', "dstest2_pgie_config.txt")
    # sgie1.set_property('config-file-path', "dstest2_sgie1_config.txt")
    # sgie2.set_property('config-file-path', "dstest2_sgie2_config.txt")
    # sgie3.set_property('config-file-path', "dstest2_sgie3_config.txt")

    sgie1.set_property('config-file-path', "customsgie.txt")

    #Set properties of tracker
    config = configparser.ConfigParser()
    config.read('dstest2_tracker_config.txt')
    config.sections()

    for key in config['tracker']:
        if key == 'tracker-width' :
            tracker_width = config.getint('tracker', key)
            tracker.set_property('tracker-width', tracker_width)
        if key == 'tracker-height' :
            tracker_height = config.getint('tracker', key)
            tracker.set_property('tracker-height', tracker_height)
        if key == 'gpu-id' :
            tracker_gpu_id = config.getint('tracker', key)
            tracker.set_property('gpu_id', tracker_gpu_id)
        if key == 'll-lib-file' :
            tracker_ll_lib_file = config.get('tracker', key)
            tracker.set_property('ll-lib-file', tracker_ll_lib_file)
        if key == 'll-config-file' :
            tracker_ll_config_file = config.get('tracker', key)
            tracker.set_property('ll-config-file', tracker_ll_config_file)
        if key == 'enable-batch-process' :
            tracker_enable_batch_process = config.getint('tracker', key)
            tracker.set_property('enable_batch_process', tracker_enable_batch_process)
        if key == 'enable-past-frame' :
            tracker_enable_past_frame = config.getint('tracker', key)
            tracker.set_property('enable_past_frame', tracker_enable_past_frame)

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(tracker)
    pipeline.add(sgie1)
    # pipeline.add(sgie2)
    # pipeline.add(sgie3)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)
    if is_aarch64():
        pipeline.add(transform)

    # we link the elements together
    # file-source -> h264-parser -> nvh264-decoder ->
    # nvinfer -> nvvidconv -> nvosd -> video-renderer
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(tracker)
    tracker.link(sgie1)
    # sgie1.link(sgie2)
    # sgie2.link(sgie3)
    # sgie3.link(nvvidconv)
    sgie1.link(nvvidconv)
    nvvidconv.link(nvosd)
    if is_aarch64():
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)


    # create an event loop and feed gstreamer bus messages to it
    loop = GLib.MainLoop()

    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)

    # Add a probe to get informed of the generated metadata. We add the probe
    # to the sink pad of the osd element, since by that time the buffer will
    # have accumulated all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)


    print("Starting pipeline \n")
    
    # start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)
    try:
      loop.run()
    except:
      pass

    # cleanup
    pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    sys.exit(main(sys.argv))


The ONNX file I made:
resnet18.onnx (21.3 MB)

labels.txt:
Ambulance;Barge;Bicycle;Boat;Bus;Car;Cart;Caterpillar;Helicopter;Limousine;Motorcycle;Segway;Snowmobile;Tank;Taxi;Truck;Van

@yuweiw can you tell me where I am going wrong and which parameter I have to update if necessary?

Hi @kuppasaisriteja, I did the following comparative tests. According to the results, there may be a problem with your model:

1. Modify your Python script to use our demo model: classifier_meta_data is not None.

sgie1.set_property('config-file-path', "dstest2_sgie1_config.txt")

2. Modify dstest2_sgie1_config.txt to use your model: classifier_meta_data is None.
dstest2_sgie1_config.txt:

[property]
gpu-id=0
net-scale-factor=1
#model-file=../../../../samples/models/Secondary_CarColor/resnet18.caffemodel
#proto-file=../../../../samples/models/Secondary_CarColor/resnet18.prototxt
#model-engine-file=../../../../samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine
model-engine-file=./resnet18.onnx_b1_gpu0_fp16.engine
#mean-file=../../../../samples/models/Secondary_CarColor/mean.ppm
#labelfile-path=../../../../samples/models/Secondary_CarColor/labels.txt
labelfile-path=./labels.txt
#int8-calib-file=../../../../samples/models/Secondary_CarColor/cal_trt.bin
#force-implicit-batch-dim=1
batch-size=1
# 0=FP32 and 1=INT8 mode
network-mode=1
input-object-min-width=64
input-object-min-height=64
process-mode=2
model-color-format=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
#output-blob-names=predictions/Softmax
classifier-async-mode=1
classifier-threshold=0.51
process-mode=2
#scaling-filter=0
#scaling-compute-hw=0

labels.txt:

Ambulance
Barge
Bicycle
Boat
Bus
Car
Cart
Caterpillar
Helicopter
Limousine
Motorcycle
Segway
Snowmobile
Tank
Taxi
Truck
Van

Okay @yuweiw, thanks for responding. I have also attached the model preparation steps; can you tell me what is wrong with the model? Is it the accuracy, the training process, or the conversion to ONNX?

As in the config file I attached, you should change this:

#model-engine-file=pose_estimation.onnx_b1_gpu0_fp16.engine
model-engine-file=resnet18.onnx_b1_gpu0_fp16.engine

Generally speaking, our model's output is data already processed by softmax, but the output of your model is just 17 raw numbers. Are these the probabilities of each label?
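
For reference, a minimal PyTorch sketch of appending a Softmax head before the ONNX export, so the exported model emits probabilities rather than raw logits (this assumes a torchvision resnet18 with a 17-class head; names are illustrative, not your actual training code):

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 17)  # 17-class head
# load your trained weights here, then wrap the model with a Softmax stage
model_with_softmax = nn.Sequential(model, nn.Softmax(dim=1)).eval()

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model_with_softmax, dummy, "resnet18.onnx",
                  input_names=["input"], output_names=["probs"])

With this, the ONNX output is already a probability distribution over the 17 classes.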

Yes, there are 17 classes, so that would be it. To my knowledge I remember adding a softmax layer to my model, but in any case I would like to ask you two things:

  1. How do you retrieve the probability values in a DeepStream Python app?
  2. How do I add a softmax as the last layer? I have done this when making the model (you can check my colab notebook); if that is not the right way, can you tell me how to do it?

@yuweiw

Thank you
K Sai Sri Teja

@yuweiw,

the model works and produces output when I change the SGIE parameter
network-type=100
to
network-type=1

but the outputs are not good. Is this the correct way, or what is happening inside DeepStream?

You can learn how to use nvinfer from the link below, including the meaning of each parameter:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html

network-type:
0: Detector
1: Classifier
2: Segmentation
3: Instance Segmentation

I am not sure whether it is necessary to have a softmax at the end of the model or not; if yes, why would you do that? And how will I get the probabilities of all the classes for a given image, i.e. the output of the model?
@yuweiw

Can you tell me how to get the probability values from DeepStream?
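
For reference, the winning label's probability is already in the object metadata: each NvDsLabelInfo carries result_prob alongside result_class_id. A minimal sketch against the probe above:

l_label = classifier_meta.label_info_list
while l_label is not None:
    label_info = pyds.NvDsLabelInfo.cast(l_label.data)
    print('class:', class_type[label_info.result_class_id],
          'prob:', label_info.result_prob)
    try:
        l_label = l_label.next
    except StopIteration:
        break

Note that result_prob only holds the probability of the surviving (top) class; getting the full 17-value vector requires reading the raw tensor output, as described below.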

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

If you use your own models, you can try the postprocess plugin to get the raw output of your model:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvdspostprocess.html
You can also refer to the link below to see how to implement it:
https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/post_processor
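
In Python, a hedged sketch of that raw-tensor route, following the pattern of the deepstream-ssd-parser sample: set output-tensor-meta=1 (with network-type=100) in the SGIE config so nvinfer attaches the raw output to each object, then read it in the probe. The helper below is illustrative:

import ctypes
import numpy as np
import pyds

def read_sgie_probs(obj_meta, num_classes=17):
    # Walk the object's user meta looking for the raw inference tensor
    l_user = obj_meta.obj_user_meta_list
    while l_user is not None:
        user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
            tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
            # assume output layer 0 is the flat (num_classes,) score vector
            layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)
            ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                              ctypes.POINTER(ctypes.c_float))
            scores = np.ctypeslib.as_array(ptr, shape=(num_classes,)).copy()
            return scores  # apply a softmax here if the model outputs raw logits
        try:
            l_user = l_user.next
        except StopIteration:
            break
    return None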
