Classifier_meta_list is None in deepstream_test2.py

I had initially sorted the issue out, but I am again getting None as the output for certain frames while running the Python app. It behaves properly with another set of frames and gives valid output. Can I know why the value of obj_meta_list is None in some instances, even when the detector draws a box around the vehicle?

Usually, when an image is passed through a neural net in PyTorch or TF, we always get a specific set of probabilities, never None. I am not sure what's happening in the background; can someone elaborate? @yingliu @yuweiw

Moved to the DeepStream forum.

Basically, when you set the output-tensor-meta=1 parameter, you can get the output tensor from the probe function. Could you run one of our demos and attach your problem, so we can debug it more conveniently?
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
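
For example, a minimal sketch of reading the raw tensor output in a pad probe, following the pattern of the tensor-meta samples in that repo (this assumes output-tensor-meta=1 is set on the nvinfer element; for an SGIE the tensor meta is attached to each object's obj_user_meta_list):

# Minimal sketch: read SGIE output tensors from an object's user meta list.
# Assumes output-tensor-meta=1 is set in the SGIE nvinfer config.
l_user = obj_meta.obj_user_meta_list
while l_user is not None:
    user_meta = pyds.NvDsUserMeta.cast(l_user.data)
    # Only the tensor-output meta attached by nvinfer is of interest here
    if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
        tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
        for i in range(tensor_meta.num_output_layers):
            layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
            print("output layer:", layer.layerName)
    try:
        l_user = l_user.next
    except StopIteration:
        break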

I am using deepstream_test2.py. The detector is YOLOv5 from the GitHub repo () and the classifier is custom-trained with NVIDIA TAO on 5 labels.

python script.py

#!/usr/bin/env python3

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

import sys
sys.path.append('../')
import platform
import configparser

import os
no_display = True

print(os.getcwd())

import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call

import pyds

PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3
past_tracking_meta=[0]

def osd_sink_pad_buffer_probe(pad,info,u_data):
    frame_number=0
    # Initializing object counter with 0.
    obj_counter = {
        PGIE_CLASS_ID_VEHICLE:0,
        PGIE_CLASS_ID_PERSON:0,
        PGIE_CLASS_ID_BICYCLE:0,
        PGIE_CLASS_ID_ROADSIGN:0
    }
    num_rects=0

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:

            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.glist_get_nvds_frame_meta()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            #frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number=frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj=frame_meta.obj_meta_list

        print(l_obj)

        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                #obj_meta=pyds.glist_get_nvds_object_meta(l_obj.data)
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
                # print(obj_meta.detector_bbox_info)


                if (obj_meta.class_id == PGIE_CLASS_ID_VEHICLE):

                    cls_obj = obj_meta.classifier_meta_list
                    print('cls obj is',cls_obj)

                    while cls_obj is not None:
                        print('The cls is not none')
                        try:

                            cls_meta=pyds.NvDsClassifierMeta.cast(cls_obj.data)
                            print('the labels info', cls_meta.label_info_list)
                            print('component id',cls_meta.unique_component_id)

                            if cls_meta.unique_component_id==2:
                                
                                print()
                                cls_meta_lbl = cls_meta.label_info_list

                                while cls_meta_lbl is not None:
                                    try:
                                        cls_meta_lbl_info=pyds.NvDsLabelInfo.cast(cls_meta_lbl.data)
                                        result_str = str(cls_meta_lbl_info.result_label)#.tobytes().decode('iso-8859-1')) 
                                        print("result_decode", result_str)
                                        print("result_strip:", result_str.split('\x00'))
                                        print("result_one:", result_str.split('\x00')[0])
                                        print("-----------------------------------------")
                                        print("result_label:", cls_meta_lbl_info.result_label)

                                    except StopIteration:
                                        break
                        except StopIteration:
                            break
            except StopIteration:
                break

            if obj_meta.class_id in obj_counter:
                pass

            else:
                obj_counter[obj_meta.class_id] = 0

            obj_counter[obj_meta.class_id] += 1
            obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.0)
            try: 
                l_obj=l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font , font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)

        try:
            l_frame=l_frame.next
        except StopIteration:
            break
			
    return Gst.PadProbeReturn.OK	

def main(args):
    # Check input arguments
    if(len(args)<2):
        sys.stderr.write("usage: %s <h264_elementary_stream> [0/1]\n" % args[0])
        sys.exit(1)

    # Standard GStreamer initialization
    if(len(args)==3):
        past_tracking_meta[0]=int(args[2])
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    # Since the data format in the input file is an elementary h264 stream,
    # we need an h264parser
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvv4l2decoder for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    if not tracker:
        sys.stderr.write(" Unable to create tracker \n")

    sgie1 = Gst.ElementFactory.make("nvinfer", "secondary1-nvinference-engine")
    if not sgie1:
        sys.stderr.write(" Unable to make sgie1 \n")


    # sgie2 = Gst.ElementFactory.make("nvinfer", "secondary2-nvinference-engine")
    # if not sgie2:
    #     sys.stderr.write(" Unable to make sgie2 \n")

    # sgie3 = Gst.ElementFactory.make("nvinfer", "secondary3-nvinference-engine")
    # if not sgie3:
    #     sys.stderr.write(" Unable to make sgie3 \n")

    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    if no_display:
        print("Creating Fakesink \n")
        sink = Gst.ElementFactory.make("fakesink", "fakesink")
        sink.set_property('enable-last-sample', 0)
        sink.set_property('sync', 0)
    else:
        if(is_aarch64()):
            print("Creating transform \n ")
            transform=Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
            if not transform:
                sys.stderr.write(" Unable to create transform \n")
        print("Creating EGLSink \n")
        sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")    
    
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    print("Playing file %s " %args[1])
    source.set_property('location', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)

    #Set properties of pgie and sgie
    # pgie.set_property('config-file-path', "dstest2_pgie_config.txt")
    pgie.set_property('config-file-path', "primary_pgie_config_yolo.txt")
    sgie1.set_property('config-file-path', "sgie1_custom.txt")

    #sgie2.set_property('config-file-path', "dstest2_sgie2_config.txt")
    #sgie3.set_property('config-file-path', "dstest2_sgie3_config.txt")

    #Set properties of tracker
    config = configparser.ConfigParser()
    config.read('dstest2_tracker_config.txt')
    config.sections()

    print('config', config.keys())

    for key in config['tracker']:
        if key == 'tracker-width' :
            tracker_width = config.getint('tracker', key)
            tracker.set_property('tracker-width', tracker_width)
        if key == 'tracker-height' :
            tracker_height = config.getint('tracker', key)
            tracker.set_property('tracker-height', tracker_height)
        if key == 'gpu-id' :
            tracker_gpu_id = config.getint('tracker', key)
            tracker.set_property('gpu_id', tracker_gpu_id)
        if key == 'll-lib-file' :
            tracker_ll_lib_file = config.get('tracker', key)
            tracker.set_property('ll-lib-file', tracker_ll_lib_file)
        if key == 'll-config-file' :
            tracker_ll_config_file = config.get('tracker', key)
            tracker.set_property('ll-config-file', tracker_ll_config_file)
        if key == 'enable-batch-process' :
            tracker_enable_batch_process = config.getint('tracker', key)
            tracker.set_property('enable_batch_process', tracker_enable_batch_process)
        if key == 'enable-past-frame' :
            tracker_enable_past_frame = config.getint('tracker', key)
            tracker.set_property('enable_past_frame', tracker_enable_past_frame)

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(tracker)
    pipeline.add(sgie1)
    #pipeline.add(sgie2)
    #pipeline.add(sgie3)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)
    if is_aarch64():
        pipeline.add(transform)

    # we link the elements together
    # file-source -> h264-parser -> nvv4l2-decoder -> streammux ->
    # pgie -> tracker -> sgie1 -> nvvidconv -> nvosd -> video-renderer
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(tracker)
    tracker.link(sgie1)
    sgie1.link(nvvidconv)
    #sgie2.link(sgie3)
    #sgie3.link(nvvidconv)
    nvvidconv.link(nvosd)
    if is_aarch64():
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)


    # create an event loop and feed GStreamer bus messages to it
    loop = GLib.MainLoop()

    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)

    # Let's add a probe to get informed of the generated metadata. We add the
    # probe to the sink pad of the osd element, since by that time the buffer
    # will have all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)


    print("Starting pipeline \n")
    
    # start playback and listen for events
    pipeline.set_state(Gst.State.PLAYING)
    try:
      loop.run()
    except:
      pass

    # cleanup
    pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    sys.exit(main(sys.argv))

I am using one PGIE (detector) and one SGIE (classifier). PGIE config:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/model_detection_yolov5m/DeepStream-Yolo/Vehicle-CCTV_v1.0_yolov5m_acc96.cfg
model-file=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/model_detection_yolov5m/DeepStream-Yolo/Vehicle-CCTV_v1.0_yolov5m_acc96.wts
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/model_detection_yolov5m/labels.txt
batch-size=1
network-mode=0
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/model_detection_yolov5m/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.10
pre-cluster-threshold=0.25
topk=300

SGIE config:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
#model-engine-file=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new1/deploy/model.engine

tlt-encoded-model=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new1/deploy/model.etlt
tlt-model-key=password
labelfile-path=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new1/deploy/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new1/deploy/calib.txt
force-implicit-batch-dim=1
uff-input-blob-name=input_1
output-blob-names=predictions/Softmax
infer-dims=3;224;224
batch-size=64
network-mode=2
uff-input-order=0
input-object-min-width=64
input-object-min-height=64
network-type = 1
model-color-format=1
num-detected-classes=5
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
classifier-async-mode=1
classifier-threshold=0.01
maintain-aspect-ratio=0
output-tensor-meta=0
process-mode=2
#scaling-filter=0
#scaling-compute-hw=0

In almost all frames, the classifier_meta_list is None, but the detector gives a good box around the object. I am not sure why the classifier gives this result, or whether the classifier is simply not performing well.
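
A quick check (a hypothetical debug snippet placed inside the per-object loop of the probe above) could show whether the None frames correlate with particular tracked objects:

# Hypothetical debug print inside the per-object loop of the probe
if obj_meta.class_id == PGIE_CLASS_ID_VEHICLE:
    print("frame", frame_meta.frame_num,
          "object", obj_meta.object_id,
          "classifier meta attached:", obj_meta.classifier_meta_list is not None)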

@yuweiw @yingliu

I have tried our demo code deepstream_test2.py with your classifier_meta_list print statements, and it is not always None. So you can try our demo models to test it; maybe your own models have some problems.
Besides, there is something wrong with the logic of your code below. The following code is an endless loop (the inner while loops never advance cls_obj or cls_meta_lbl):

                if (obj_meta.class_id == PGIE_CLASS_ID_VEHICLE):

                    cls_obj = obj_meta.classifier_meta_list
                    print('cls obj is',cls_obj)

                    while cls_obj is not None:
                        print('The cls is not none')
                        try:

                            cls_meta=pyds.NvDsClassifierMeta.cast(cls_obj.data)
                            print('the labels info', cls_meta.label_info_list)
                            print('component id',cls_meta.unique_component_id)

                            if cls_meta.unique_component_id==2:
                                
                                print()
                                cls_meta_lbl = cls_meta.label_info_list

                                while cls_meta_lbl is not None:
                                    try:
                                        cls_meta_lbl_info=pyds.NvDsLabelInfo.cast(cls_meta_lbl.data)
                                        result_str = str(cls_meta_lbl_info.result_label)#.tobytes().decode('iso-8859-1')) 
                                        print("result_decode", result_str)
                                        print("result_strip:", result_str.split('\x00'))
                                        print("result_one:", result_str.split('\x00')[0])
                                        print("-----------------------------------------")
                                        print("result_label:", cls_meta_lbl_info.result_label)

                                    except StopIteration:
                                        break
                        except StopIteration:
                            break
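
A minimal sketch of a fix, keeping the variable names from the snippet above: each pyds list node has to be advanced via .next (with StopIteration treated as end-of-list, as in the demo apps), otherwise the while conditions never change:

if obj_meta.class_id == PGIE_CLASS_ID_VEHICLE:
    cls_obj = obj_meta.classifier_meta_list
    while cls_obj is not None:
        cls_meta = pyds.NvDsClassifierMeta.cast(cls_obj.data)
        if cls_meta.unique_component_id == 2:
            cls_meta_lbl = cls_meta.label_info_list
            while cls_meta_lbl is not None:
                cls_meta_lbl_info = pyds.NvDsLabelInfo.cast(cls_meta_lbl.data)
                print("result_label:", cls_meta_lbl_info.result_label)
                try:
                    # advance the label-info list, otherwise this loop never ends
                    cls_meta_lbl = cls_meta_lbl.next
                except StopIteration:
                    break
        try:
            # advance the classifier-meta list as well
            cls_obj = cls_obj.next
        except StopIteration:
            break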

Is there a specific reason why it is None sometimes and a valid result at other times? @yuweiw

@yuweiw, any idea?

There may be some problems here:
1. Your own model may have problems.
2. Your code has some logic problems; for example, the attached code loops endlessly.
You need to debug the questions above by yourself.

I have fixed the endless-loop problem, but I still face the None issue. I am not sure how to debug the model training side. I followed the official NVIDIA TAO docs to train the model; it works fine on single images, but it fails when run behind the object detector. Can you tell me how to debug errors on the model training end?
@yuweiw

We'll discuss the model training issue in your other topic:
https://forums.developer.nvidia.com/t/i-have-a-custom-trained-pytorch-model-in-pt-format-how-can-i-give-this-as-an-input-for-deepstream-python-apps/230116
You can also run deepstream_test2.py with our own demo models in your environment and see whether it prints well. Thanks.
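
For reference, a typical invocation of the stock demo (the sample stream path assumes a default DeepStream 6.1 installation; adjust it to your environment):

cd /opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2
python3 deepstream_test_2.py /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264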

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.