DeepStream 7.0 object detection issue: only class_id = 0 is detected, other classes are ignored

• Hardware Platform (Jetson / GPU) = GPU – Tesla V100 PCIe 16GB
• DeepStream Version = 7.0.0
• JetPack Version (valid for Jetson only) = N/A (GPU platform)
• TensorRT Version = 8.6
• NVIDIA GPU Driver Version (valid for GPU only) = 535.161.08
• Issue Type (questions, new requirements, bugs) = bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
config file:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/aiis-deepstream-pipeline/modelss/vehicledetection3.onnx
#model-engine-file=/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/aiis-deepstream-pipeline/modelss/vehicledetection3.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/aiis-deepstream-pipeline/modelss/vd_labels.txt
batch-size=1
network-mode=0 # FP32 (0=FP32, 1=INT8, 2=FP16)
num-detected-classes=5
interval=0
output-blob-names=detections
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/aiis-deepstream-pipeline/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.2
pre-cluster-threshold=0.15
topk=1500
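
For reference, nvinfer maps class ids to labels by line order, so the label file referenced above is expected to contain exactly num-detected-classes lines, one label per line. A hypothetical vd_labels.txt matching the five classes used in the script below:

Person
Bicycle
Car
Motorcycle
Bus

The Python pipeline script used for reproduction follows.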
import sys
sys.path.append("../")
from common.bus_call import bus_call
from common.platform_info import PlatformInfo
from common.FPS import PERF_DATA
import pyds
import pymongo  # needed for the MongoDB client created below
import platform
import math
import time
import os
from ctypes import *
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer, GLib, GObject
import configparser
import datetime
import argparse
import threading

no_display = False
silent = False
file_loop = False
perf_data = None

MAX_DISPLAY_LEN = 64

PGIE_CLASS_ID_PERSON = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_CAR = 2
PGIE_CLASS_ID_MOTORCYCLE = 3
PGIE_CLASS_ID_BUS = 5

MUXER_OUTPUT_WIDTH = 1920
MUXER_OUTPUT_HEIGHT = 1080
MUXER_BATCH_TIMEOUT_USEC = 33000
TILED_OUTPUT_WIDTH = 1280
TILED_OUTPUT_HEIGHT = 720
GST_CAPS_FEATURES_NVMM = "memory:NVMM"
OSD_PROCESS_MODE = 0
OSD_DISPLAY_TEXT = 0
pgie_classes_str = ["Person", "Bicycle", "Car", "Motorcycle", "Bus"]

myclient = pymongo.MongoClient("mongodb://192.1.81.54:27017/")

mydb = myclient["smart_security_"]

def osd_sink_pad_buffer_probe(pad, info, u_data):
    frame_number = 0
    num_rects = 0

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return Gst.PadProbeReturn.OK
    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list

    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            print("frame_meta", frame_meta)
        except StopIteration:
            break

        # Initialize object counter with 0 for our target classes
        obj_counter = {
            PGIE_CLASS_ID_PERSON: 0,
            PGIE_CLASS_ID_BICYCLE: 0,
            PGIE_CLASS_ID_CAR: 0,
            PGIE_CLASS_ID_MOTORCYCLE: 0,
            PGIE_CLASS_ID_BUS: 0
        }

        frame_number = frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            # Use .get() so an unexpected class_id does not raise KeyError
            obj_counter[obj_meta.class_id] = obj_counter.get(obj_meta.class_id, 0) + 1
            obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.8)  # 0.8 is alpha (opacity)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(
            frame_number, num_rects, obj_counter[PGIE_CLASS_ID_CAR], obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font, font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK

def pgie_src_pad_buffer_probe(pad, info, u_data):
    frame_number = 0
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer")
        return Gst.PadProbeReturn.OK

    # Get batch metadata
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list

    # Debug: collect all detected classes
    detected_classes = set()

    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number = frame_meta.frame_num
        l_obj = frame_meta.obj_meta_list

        # Initialize object counters
        obj_counter = {
            PGIE_CLASS_ID_PERSON: 0,
            PGIE_CLASS_ID_BICYCLE: 0,
            PGIE_CLASS_ID_CAR: 0,
            PGIE_CLASS_ID_MOTORCYCLE: 0,
            PGIE_CLASS_ID_BUS: 0,
        }
        total_relevant_objects = 0

        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
                detected_classes.add(obj_meta.class_id)
                print("detected classes", obj_meta.class_id)

                # Debug print for ALL detections
                print(f"[RAW] Frame {frame_number}: {obj_meta.obj_label} (ID:{obj_meta.class_id}) Conf:{obj_meta.confidence:.2f}")

                # Filter for target classes
                if obj_meta.class_id in obj_counter:
                    obj_counter[obj_meta.class_id] += 1
                    total_relevant_objects += 1

                    # Enhanced debug for vehicles
                    print(f"[VEHICLE] Frame {frame_number}: {obj_meta.obj_label} "
                          f"at ({obj_meta.rect_params.left:.0f},{obj_meta.rect_params.top:.0f}) "
                          f"Size:{obj_meta.rect_params.width:.0f}x{obj_meta.rect_params.height:.0f} "
                          f"Conf:{obj_meta.confidence:.2f}")

            except Exception as e:
                print(f"Error processing obj_meta: {str(e)}")
                break

            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        # Frame summary
        print("\n" + "=" * 50)
        print(f"Frame {frame_number} Vehicle Report:")
        print(f"  Persons: {obj_counter[PGIE_CLASS_ID_PERSON]}")
        print(f"  Bicycles: {obj_counter[PGIE_CLASS_ID_BICYCLE]}")
        print(f"  Cars: {obj_counter[PGIE_CLASS_ID_CAR]}")
        print(f"  Motorcycles: {obj_counter[PGIE_CLASS_ID_MOTORCYCLE]}")
        print(f"  Buses: {obj_counter[PGIE_CLASS_ID_BUS]}")
        print("=" * 50 + "\n")

        # Performance monitoring
        stream_index = f"stream{frame_meta.pad_index}"
        if perf_data is not None:
            perf_data.update_fps(stream_index)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    # Debug: print all unique classes detected in this batch
    if detected_classes:
        print(f"\nAll detected class IDs in this batch: {sorted(detected_classes)}")

    return Gst.PadProbeReturn.OK

def cb_newpad(decodebin, decoder_src_pad, data):
    """
    Called when a new pad is created by the decodebin.
    The function checks if the new pad is for video and not audio.
    If the new pad is for video, it checks if the pad caps contain NVMM memory
    features. If they do, it links the decodebin pad to the source bin ghost
    pad; otherwise it prints an error message.
    :param decodebin: The decodebin element that is creating the new pad
    :param decoder_src_pad: The source pad created by the decodebin element
    :param data: The data passed to the callback function; in this case, the
    source_bin
    """
    print("In cb_newpad\n")
    caps = decoder_src_pad.get_current_caps()
    gststruct = caps.get_structure(0)
    gstname = gststruct.get_name()
    source_bin = data
    features = caps.get_features(0)

    # Need to check if the pad created by the decodebin is for video and not
    # audio.
    print("gstname=", gstname)
    if gstname.find("video") != -1:
        # Link the decodebin pad only if decodebin has picked the nvidia
        # decoder plugin nvdec_*. We do this by checking if the pad caps
        # contain NVMM memory features.
        print("features=", features)
        if features.contains("memory:NVMM"):
            # Get the source bin ghost pad
            bin_ghost_pad = source_bin.get_static_pad("src")
            if not bin_ghost_pad.set_target(decoder_src_pad):
                sys.stderr.write(
                    "Failed to link decoder src pad to source bin ghost pad\n"
                )
        else:
            sys.stderr.write(" Error: Decodebin did not pick nvidia decoder plugin.\n")

def decodebin_child_added(child_proxy, Object, name, user_data):
    """
    If the child added to the decodebin is another decodebin, connect to its
    child-added signal. If the child added is a source, set its
    drop-on-latency property to True.

    :param child_proxy: The child element that was added to the decodebin
    :param Object: The object that emitted the signal
    :param name: The name of the element that was added
    :param user_data: A pointer to the data to pass to the callback function
    """
    print("Decodebin child added:", name, "\n")
    if name.find("decodebin") != -1:
        Object.connect("child-added", decodebin_child_added, user_data)

    if "source" in name:
        source_element = child_proxy.get_by_name("source")
        if source_element.find_property("drop-on-latency") is not None:
            Object.set_property("drop-on-latency", True)

def create_source_bin(index, uri):
    """
    Creates a GstBin, adds a uridecodebin to it, and connects the
    uridecodebin's pad-added signal to a callback function.

    :param index: The index of the source bin
    :param uri: The URI of the video file to be played
    :return: A bin with a uridecodebin and a ghost pad.
    """
    print("Creating source bin")

    # Create a source GstBin to abstract this bin's content from the rest of
    # the pipeline
    bin_name = "source-bin-%02d" % index
    print(bin_name)
    nbin = Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write(" Unable to create source bin \n")

    # Source element for reading from the uri.
    # We will use decodebin and let it figure out the container format of the
    # stream and the codec, and plug the appropriate demux and decode plugins.
    uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    if not uri_decode_bin:
        sys.stderr.write(" Unable to create uri decode bin \n")
    # We set the input uri to the source element
    uri_decode_bin.set_property("uri", uri)
    # Connect to the "pad-added" signal of the decodebin, which generates a
    # callback once a new pad for raw data has been created by the decodebin
    uri_decode_bin.connect("pad-added", cb_newpad, nbin)
    uri_decode_bin.connect("child-added", decodebin_child_added, nbin)

    # We need to create a ghost pad for the source bin which will act as a
    # proxy for the video decoder src pad. The ghost pad will not have a
    # target right now. Once the decode bin creates the video decoder and
    # generates the cb_newpad callback, we will set the ghost pad target to
    # the video decoder src pad.
    Gst.Bin.add(nbin, uri_decode_bin)
    bin_pad = nbin.add_pad(Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC))
    if not bin_pad:
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    return nbin

def make_element(element_name, i):
    """
    Creates a GStreamer element with a unique name.
    The unique name is created by appending the index, e.g. element_name-i.
    A unique name is essential for every element in the pipeline, otherwise
    GStreamer will throw an exception.
    :param element_name: The name of the element to create
    :param i: the index of the element in the pipeline
    :return: A Gst.Element object
    """
    element = Gst.ElementFactory.make(element_name, element_name)
    if not element:
        sys.stderr.write(" Unable to create {0}".format(element_name))
    element.set_property("name", "{0}-{1}".format(element_name, str(i)))
    return element

def setup_rtsp_server(rtsp_port_num):
    server = GstRtspServer.RTSPServer.new()
    server.props.service = f"{rtsp_port_num}"
    server.attach(None)
    return server

def create_rtsp_out(rtsp_server, udp_port, stream_path):
    vcodec = "H264"
    factory = GstRtspServer.RTSPMediaFactory.new()
    factory.set_launch(
        '( udpsrc name=pay0 port=%d buffer-size=524288 caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=(string)%s, payload=96" )'
        % (udp_port, vcodec)
    )
    print(f"UDP PORT for {stream_path}: {udp_port}")
    factory.set_shared(True)
    rtsp_server.get_mount_points().add_factory(stream_path, factory)
    print(f"\n *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8555{stream_path} ***\n")

def main(args):
    number_sources = len(args)
    print(f"{number_sources = }")
    global perf_data
    perf_data = PERF_DATA(number_sources)
    rtsp_port = 8555
    udp_port = 5400
    codec = "H264"
    bitrate = 4000000

    stream_udpport_mapping = {}

    # Standard GStreamer initialization
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()
    is_live = False

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")
    print("Creating streammux \n ")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    pipeline.add(streammux)
    for i in range(number_sources):
        print("Creating source_bin ", i, " \n ")
        uri_name = args[i]
        if uri_name.find("rtsp://") == 0:
            is_live = True
        source_bin = create_source_bin(i, uri_name)
        if not source_bin:
            sys.stderr.write("Unable to create source bin \n")
        pipeline.add(source_bin)

        padname = f"sink_{i}"
        sinkpad = streammux.request_pad_simple(padname)
        print("padname {}".format(padname))
        if not sinkpad:
            sys.stderr.write("Unable to create sink pad bin \n")
        srcpad = source_bin.get_static_pad("src")
        if not srcpad:
            sys.stderr.write("Unable to create src pad bin \n")

        print(f"Linking source_bin[{i}] src pad to streammux sink_{i} pad.")
        srcpad.link(sinkpad)

    print("Creating queue1 \n ")
    queue1 = Gst.ElementFactory.make("queue", "queue1")
    pipeline.add(queue1)

    print("Creating Pgie \n ")
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    print("Creating nvstreamdemux \n ")
    nvstreamdemux = Gst.ElementFactory.make("nvstreamdemux", "nvstreamdemux")
    if not nvstreamdemux:
        sys.stderr.write(" Unable to create nvstreamdemux \n")

    if is_live:
        print("At least one of the sources is live")
        streammux.set_property("live-source", 1)

    streammux.set_property("width", 1920)
    streammux.set_property("height", 1080)
    streammux.set_property("batch-size", number_sources)
    streammux.set_property("batched-push-timeout", 4000000)
    # pgie.set_property("config-file-path", "deepstream-nvdsanalytics/ds_demux_pgie_config.txt")
    pgie.set_property("config-file-path", "config/dstest1_pgie1_config_yolox.txt")
    pgie_batch_size = pgie.get_property("batch-size")
    if pgie_batch_size != number_sources:
        print(
            "WARNING: Overriding infer-config batch-size",
            pgie_batch_size,
            " with number of sources ",
            number_sources,
            " \n",
        )
        pgie.set_property("batch-size", number_sources)

    print("Adding elements to Pipeline \n")
    pipeline.add(pgie)
    pipeline.add(nvstreamdemux)

    # linking
    streammux.link(queue1)
    queue1.link(pgie)
    pgie.link(nvstreamdemux)

    # creating demux src pads
    for i, uri in enumerate(args):
        uri_stream_path = uri.split("/")[-1]
        # creating queue
        print("i============ {}".format(i))
        queue = make_element("queue", i)
        pipeline.add(queue)

        # creating nvvidconv
        nvvideoconvert = make_element("nvvideoconvert", i)
        pipeline.add(nvvideoconvert)

        # creating nvosd
        nvdsosd = make_element("nvdsosd", i)
        pipeline.add(nvdsosd)
        nvdsosd.set_property("process-mode", OSD_PROCESS_MODE)
        nvdsosd.set_property("display-text", OSD_DISPLAY_TEXT)

        # connect nvstreamdemux -> queue
        padname = "src_%u" % i
        demuxsrcpad = nvstreamdemux.request_pad_simple(padname)
        if not demuxsrcpad:
            sys.stderr.write("Unable to create demux src pad \n")

        queuesinkpad = queue.get_static_pad("sink")
        if not queuesinkpad:
            sys.stderr.write("Unable to create queue sink pad \n")
        demuxsrcpad.link(queuesinkpad)

        # connect queue -> nvvidconv -> nvosd
        queue.link(nvvideoconvert)
        nvvideoconvert.link(nvdsosd)
        nvvidconv_postosd = Gst.ElementFactory.make("nvvideoconvert", "convertor_postosd" + str(i))
        if not nvvidconv_postosd:
            sys.stderr.write(" Unable to create nvvidconv_postosd \n")

        # Create a caps filter
        caps = Gst.ElementFactory.make("capsfilter", "filter" + str(i))
        if i < 30:
            caps.set_property(
                "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM),format=I420")
            )
        else:
            caps.set_property(
                "caps", Gst.Caps.from_string("video/x-raw,format=I420")
            )

        # Make the encoder
        encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder" + str(i))
        print("Creating HW H264 Encoder")
        if not encoder:
            sys.stderr.write(" Unable to create encoder")

        encoder.set_property("bitrate", bitrate)

        # Make the payload-encode video into RTP packets
        if codec == "H264":
            rtppay = Gst.ElementFactory.make("rtph264pay", "rtppay" + str(i))
            print("Creating H264 rtppay")
        elif codec == "H265":
            rtppay = Gst.ElementFactory.make("rtph265pay", "rtppay")
            print("Creating H265 rtppay")
        if not rtppay:
            sys.stderr.write(" Unable to create rtppay")

        h264parse = Gst.ElementFactory.make("h264parse", "h264parse" + str(i))

        sink = Gst.ElementFactory.make("rtspclientsink", "rtspclientsink" + str(i))
        if not sink:
            sys.stderr.write(" Unable to create rtspclientsink")

        stream_udpport_mapping[uri_stream_path] = udp_port
        udp_port += 1

        media_mtx_url = f"rtsp://localhost:8554/outstream{i}"  # MediaMTX server; replace host with the actual IP or hostname
        sink.set_property("location", media_mtx_url)
        sink.set_property("protocols", "tcp")

        pipeline.add(nvvidconv_postosd)
        pipeline.add(caps)
        pipeline.add(encoder)
        pipeline.add(rtppay)
        pipeline.add(h264parse)
        pipeline.add(sink)

        nvdsosd.link(nvvidconv_postosd)
        nvvidconv_postosd.link(caps)
        caps.link(encoder)
        encoder.link(h264parse)
        h264parse.link(sink)

    print("Linking elements in the Pipeline \n")
    # create an event loop and feed gstreamer bus messages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    pgie_src_pad = pgie.get_static_pad("src")
    # if not pgie_src_pad:
    #     sys.stderr.write(" Unable to get src pad \n")
    # else:
    #     pgie_src_pad.add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0)
    #     # perf callback function to print fps every 5 sec
    #     GLib.timeout_add(5000, perf_data.perf_print_callback)

    # Let's add a probe to get informed of the generated metadata. We add the
    # probe to the sink pad of the osd element, since by that time the buffer
    # will have all the metadata.
    # NOTE: nvdsosd here refers to the OSD created for the last stream in the
    # loop above, so the probe is only attached to that stream's OSD.
    osdsinkpad = nvdsosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")

    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    print("Starting pipeline \n")

    # setup rtsp server
    rtsp_server = setup_rtsp_server(rtsp_port_num=rtsp_port)

    print(f"{stream_udpport_mapping = }")
    for ix, uri in enumerate(args):
        uri_stream_path = uri.split("/")[-1]
        out_stream_path = f"/outstream{ix}"

        stream_udp_port = stream_udpport_mapping[uri_stream_path]
        # create_rtsp_out(rtsp_server=rtsp_server, udp_port=stream_udp_port, stream_path=out_stream_path)
        print(f"{stream_udp_port = }, {out_stream_path = }")

    # start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)

    try:
        loop.run()
    except:
        pass
    # cleanup
    print("Exiting app\n")
    pipeline.set_state(Gst.State.NULL)

def parse_args():
    parser = argparse.ArgumentParser(
        prog="deepstream_demux_multi_in_multi_out.py",
        description="deepstream-demux-multi-in-multi-out takes multiple URI streams as input "
        "and uses nvstreamdemux to split batches and output separate buffer/streams")
    parser.add_argument(
        "-i",
        "--input",
        help="Path to input streams",
        nargs="+",
        metavar="URIs",
        default=["a"],
        required=True,
    )
    if len(sys.argv) == 1:
        parser.print_help(sys.stderr)
        sys.exit(1)
    args = parser.parse_args()

    stream_paths = args.input
    return stream_paths

if __name__ == "__main__":
    stream_paths = parse_args()
    sys.exit(main(stream_paths))

  1. Can your model detect 5 classes? Since only class_id=0 is detected, are the labels and bboxes correct?
  2. If nms-iou-threshold and pre-cluster-threshold are set to 0, are the results correct?
  1. Yes, it is correct. I have also tried the pretrained YOLOv8n model, which has 80 classes, and chose 5 classes from it: person, motorcycle, car, bicycle, and truck. But it detects only person.
  2. Yes, I tried another detection model, and even there only class_id 0 is detected.
  3. The config file:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/aiis-deepstream-pipeline/modelss/best.onnx
model-engine-file=/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/aiis-deepstream-pipeline/modelss/best.engine
#int8-calib-file=calib.table
labelfile-path=/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/aiis-deepstream-pipeline/modelss/catanddog.txt
batch-size=1
network-mode=0
num-detected-classes=2
interval=0
gie-unique-id=2
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=/opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/apps/aiis-deepstream-pipeline/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0
pre-cluster-threshold=0
topk=300

and the output:
Frame Number=0 Number of Objects=18 Cat_count=18 Dog_count=0
Frame Number=1 Number of Objects=18 Cat_count=18 Dog_count=0
Frame Number=2 Number of Objects=18 Cat_count=18 Dog_count=0
Frame Number=3 Number of Objects=20 Cat_count=20 Dog_count=0
Frame Number=4 Number of Objects=19 Cat_count=19 Dog_count=0
Frame Number=5 Number of Objects=18 Cat_count=18 Dog_count=0
Frame Number=6 Number of Objects=20 Cat_count=20 Dog_count=0
Frame Number=7 Number of Objects=21 Cat_count=21 Dog_count=0
Frame Number=8 Number of Objects=21 Cat_count=21 Dog_count=0
Frame Number=9 Number of Objects=22 Cat_count=22 Dog_count=0
Frame Number=10 Number of Objects=21 Cat_count=21 Dog_count=0
Frame Number=11 Number of Objects=25 Cat_count=25 Dog_count=0
Frame Number=12 Number of Objects=22 Cat_count=22 Dog_count=0
Frame Number=13 Number of Objects=21 Cat_count=21 Dog_count=0
Frame Number=14 Number of Objects=17 Cat_count=17 Dog_count=0
Frame Number=15 Number of Objects=15 Cat_count=15 Dog_count=0
Frame Number=16 Number of Objects=18 Cat_count=18 Dog_count=0
Frame Number=17 Number of Objects=22 Cat_count=22 Dog_count=0
Frame Number=18 Number of Objects=20 Cat_count=20 Dog_count=0
Frame Number=19 Number of Objects=24 Cat_count=24 Dog_count=0
Frame Number=20 Number of Objects=21 Cat_count=21 Dog_count=0
Frame Number=21 Number of Objects=15 Cat_count=15 Dog_count=0
Frame Number=22 Number of Objects=15 Cat_count=15 Dog_count=0
Frame Number=23 Number of Objects=17 Cat_count=17 Dog_count=0
Frame Number=24 Number of Objects=21 Cat_count=21 Dog_count=0
Frame Number=25 Number of Objects=23 Cat_count=23 Dog_count=0
Frame Number=26 Number of Objects=22 Cat_count=22 Dog_count=0
Frame Number=27 Number of Objects=20 Cat_count=20 Dog_count=0
Frame Number=28 Number of Objects=19 Cat_count=19 Dog_count=0
Frame Number=29 Number of Objects=24 Cat_count=24 Dog_count=0
Frame Number=30 Number of Objects=22 Cat_count=22 Dog_count=0
Frame Number=31 Number of Objects=19 Cat_count=19 Dog_count=0
Frame Number=32 Number of Objects=18 Cat_count=18 Dog_count=0
Frame Number=33 Number of Objects=19 Cat_count=19 Dog_count=0
Frame Number=34 Number of Objects=19 Cat_count=19 Dog_count=0
Frame Number=35 Number of Objects=17 Cat_count=17 Dog_count=0
Frame Number=36 Number of Objects=18 Cat_count=18 Dog_count=0
Frame Number=37 Number of Objects=29 Cat_count=29 Dog_count=0
Frame Number=38 Number of Objects=20 Cat_count=20 Dog_count=0
Frame Number=39 Number of Objects=21 Cat_count=21 Dog_count=0
Frame Number=40 Number of Objects=19 Cat_count=19 Dog_count=0
Frame Number=41 Number of Objects=23 Cat_count=23 Dog_count=0
Frame Number=42 Number of Objects=24 Cat_count=24 Dog_count=0
Frame Number=43 Number of Objects=26 Cat_count=26 Dog_count=0
Frame Number=44 Number of Objects=27 Cat_count=27 Dog_count=0
Frame Number=45 Number of Objects=26 Cat_count=26 Dog_count=0
Frame Number=46 Number of Objects=28 Cat_count=28 Dog_count=0
Frame Number=47 Number of Objects=30 Cat_count=30 Dog_count=0
Frame Number=48 Number of Objects=29 Cat_count=29 Dog_count=0
Frame Number=49 Number of Objects=28 Cat_count=28 Dog_count=0
Frame Number=50 Number of Objects=25 Cat_count=25 Dog_count=0
Frame Number=51 Number of Objects=30 Cat_count=30 Dog_count=0
Frame Number=52 Number of Objects=27 Cat_count=27 Dog_count=0
Frame Number=53 Number of Objects=24 Cat_count=24 Dog_count=0
Frame Number=54 Number of Objects=23 Cat_count=23 Dog_count=0
Frame Number=55 Number of Objects=23 Cat_count=23 Dog_count=0
Frame Number=56 Number of Objects=22 Cat_count=22 Dog_count=0
Frame Number=57 Number of Objects=28 Cat_count=28 Dog_count=0
Frame Number=58 Number of Objects=28 Cat_count=28 Dog_count=0
Frame Number=59 Number of Objects=25 Cat_count=25 Dog_count=0
Frame Number=60 Number of Objects=24 Cat_count=24 Dog_count=0
Frame Number=61 Number of Objects=24 Cat_count=24 Dog_count=0
Frame Number=62 Number of Objects=23 Cat_count=23 Dog_count=0
Frame Number=63 Number of Objects=30 Cat_count=30 Dog_count=0
Frame Number=64 Number of Objects=24 Cat_count=24 Dog_count=0
Frame Number=65 Number of Objects=21 Cat_count=21 Dog_count=0
Frame Number=66 Number of Objects=19 Cat_count=19 Dog_count=0
Frame Number=67 Number of Objects=18 Cat_count=18 Dog_count=0
Frame Number=68 Number of Objects=22 Cat_count=22 Dog_count=0
Frame Number=69 Number of Objects=24 Cat_count=24 Dog_count=0
Frame Number=70 Number of Objects=23 Cat_count=23 Dog_count=0
Frame Number=71 Number of Objects=26 Cat_count=26 Dog_count=0
Frame Number=72 Number of Objects=25 Cat_count=25 Dog_count=0
Frame Number=73 Number of Objects=27 Cat_count=27 Dog_count=0
Frame Number=74 Number of Objects=19 Cat_count=19 Dog_count=0
Frame Number=75 Number of Objects=20 Cat_count=20 Dog_count=0
Frame Number=76 Number of Objects=27 Cat_count=27 Dog_count=0
Frame Number=77 Number of Objects=25 Cat_count=25 Dog_count=0
Frame Number=78 Number of Objects=24 Cat_count=24 Dog_count=0
Frame Number=79 Number of Objects=24 Cat_count=24 Dog_count=0
Frame Number=80 Number of Objects=26 Cat_count=26 Dog_count=0
Frame Number=81 Number of Objects=23 Cat_count=23 Dog_count=0
Frame Number=82 Number of Objects=18 Cat_count=18 Dog_count=0

What do you mean by this? I mean, if nms-iou-threshold and pre-cluster-threshold are set to 0, are other class_ids detected?
Is this a custom model? Did you verify the model with other tools? Can the model detect 5 classes?

Hi Sir, the model I used is the pretrained YOLOv8n model. When I set nms-iou-threshold and pre-cluster-threshold to 0, only class_id = 0 is detected and no other classes. I have run inference with both the .pt model and the ONNX export of YOLOv8n, and they can detect all the classes: person, car, truck, motorcycle. At first I thought the model was only detecting the person class, but later, to check the issue, I trained a custom model on two classes, cat and dog. There too, only cat (class_id = 0) is detected in the pipeline, and the second class (dog, class_id = 1) is ignored. I'm not sure whether the issue is in exporting the model to ONNX, in the config file, or in my parser file.
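
[Editor's note] A minimal sketch of this kind of export sanity check, assuming the Ultralytics package is installed and a local sample.jpg exists (both are illustrative assumptions). Note that the DeepStream-Yolo custom parser (NvDsInferParseYolo) is typically paired with that repo's own export_yoloV8.py script; exporting with a head layout the parser does not expect is one plausible cause of a constant class id:

# Sanity-check the .pt model, then export to ONNX (a sketch, not the
# poster's exact export script; "sample.jpg" is a placeholder image).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # or the custom cat/dog checkpoint
results = model("sample.jpg")       # run the .pt model first
for r in results:
    print(r.boxes.cls.tolist())     # class ids should span more than just 0

model.export(format="onnx", opset=12, simplify=True)  # writes yolov8n.onnx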

It seems you have trained two new models: one that can detect 2 classes and one that can detect 5 classes. Let's focus on the first model.

  1. How did you train the model? With NVIDIA TAO? Have you verified that the model can detect two classes with the TAO tool or another tool? Did you modify the output layer? You can use Netron to check, or list the outputs programmatically as shown below.
  2. Please set num-detected-classes to 2 and correct vd_labels.txt. Also set nms-iou-threshold and pre-cluster-threshold to 0 for debugging.
  3. You can add a log to print maxIndex in the postprocessing function NvDsInferParseYolo. Please check whether maxIndex is always 0.
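
[Editor's note] As a programmatic alternative to Netron for point 1, the output heads can be listed with the onnx Python package (a sketch; best.onnx is the path from the config above, and symbolic dimensions print as 0):

import onnx

model = onnx.load("best.onnx")
for out in model.graph.output:
    dims = [d.dim_value for d in out.type.tensor_type.shape.dim]
    print(out.name, dims)  # e.g. boxes/scores/classes heads with their shapes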
  1. Hi Sir, I used a pretrained YOLOv8n model for 5-class detection. While exporting the model, I modified the output layers into boxes, scores, and classes. I used Netron to check the layers; the ONNX output layers are correct.
  2. Yes, I have set num-detected-classes to 2, corrected vd_labels.txt, and set nms-iou-threshold and pre-cluster-threshold to 0 to debug the output.

Please refer to my last comment. Before testing in DeepStream, please validate that the model can detect two classes outside DeepStream. You can add a log to print the classes in the postprocessing function NvDsInferParseYolo. Please check whether all classes are 0.
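
[Editor's note] A minimal sketch of that validation outside DeepStream, assuming onnxruntime and opencv-python are installed. The preprocessing (640x640 input, BGR to RGB, 1/255 scaling) mirrors the net-scale-factor in the config above but is an assumption about this particular export; the "class" output name is illustrative, so check it against the real graph first:

import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("best.onnx", providers=["CPUExecutionProvider"])
inp = session.get_inputs()[0]

# Preprocess roughly as nvinfer does: resize, BGR->RGB, NCHW, scale by 1/255
img = cv2.imread("sample.jpg")   # placeholder test image
img = cv2.resize(img, (640, 640))
blob = img[:, :, ::-1].transpose(2, 0, 1)[None].astype(np.float32) / 255.0

outputs = session.run(None, {inp.name: blob})
for meta, arr in zip(session.get_outputs(), outputs):
    arr = np.asarray(arr)
    print(meta.name, arr.shape)
    # If this head carries class indices, both 0 and 1 should appear
    if "class" in meta.name.lower():
        print("  unique class ids:", np.unique(arr))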

