How to make a parsing function for my regression model

I have a regression model whose output shape is 1x12x16.
I tried to run its engine with deepstream-test1
and I got this error:
[UID = 1]: Could not find output coverage layer for parsing objects
[UID = 1]: Failed to parse bboxes

It's not a detection or classification problem, it's regression. How can I parse this output?

• Hardware Platform (GPU)
• DeepStream 5
• TensorRT 7.0.0

Currently DS can only support detection/classifier/segmentation models, so if your model doesn't belong to these types, you can set network-type=100 and enable output-tensor-meta=1; nvinfer will then attach the tensor outputs as meta on the GstBuffer, so you can access the output and parse it. You can refer to deepstream_infer_tensor_meta_test.cpp for more details.
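
For example, the relevant part of the nvinfer config file would look like this (a minimal sketch; the rest of the [property] group stays as in a normal config):

[property]
# 100 = "other" network type: skip the built-in detector/classifier/segmentation parsing
network-type=100
# attach the raw output tensors to the GstBuffer as user meta
output-tensor-meta=1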

How can I receive the full output vector in the test1.py code?
In which variable, I mean?

I am working with the Python "test1" sample.
Is there any workaround in Python rather than C++?
I cannot find how to access the tensor outputs attached as meta on the GstBuffer in the Python test1.

You just need to enable those config items in the nvinfer config file as in my last comment; refer to the Gst-nvinfer — DeepStream 6.3 Release documentation for how to configure nvinfer.

Then in your app code you can access the NvDsInferTensorMeta (https://docs.nvidia.com/metropolis/deepstream/python-api/PYTHON_API/NvDsInfer/NvDsInferTensorMetaDoc.html) just like the C/C++ code does.
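
For example, inside a pad probe you could read the tensor roughly like this (a sketch assuming the DeepStream 5 pyds bindings; read_regression_output is just an illustrative name, and the 192 floats come from the 1x12x16 output shape mentioned above, assumed to be FP32):

import ctypes
import numpy as np
import pyds

def read_regression_output(frame_meta):
    # nvinfer attaches NVDSINFER_TENSOR_OUTPUT_META to the frame-level
    # user meta list when output-tensor-meta=1 is set on a primary GIE.
    l_user = frame_meta.frame_user_meta_list
    while l_user is not None:
        user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
            tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
            for i in range(tensor_meta.num_output_layers):
                layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                # Reinterpret the host buffer as float32: 192 = 1x12x16
                ptr = ctypes.cast(pyds.get_ptr(layer.buffer), ctypes.POINTER(ctypes.c_float))
                output = np.ctypeslib.as_array(ptr, shape=(192,))
                print(layer.layerName, output.reshape(12, 16))
        try:
            l_user = l_user.next
        except StopIteration:
            break

The key point is that the NvDsInferTensorMeta hangs off user_meta.user_meta_data of an entry in frame_user_meta_list; casting l_frame.data to NvDsInferTensorMeta directly will only produce garbage values.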

I have done the following in the deepstream-test1 Python sample:
1- added these lines to the config:
network-type=100
output-tensor-meta=1
2- cast l_frame.data to NvDsInferTensorMeta like this:

frame_meta2 = pyds.NvDsInferTensorMeta.cast(l_frame.data)

3- I also tried to process it like the C++ code, but on l_frame rather than l_obj, because it's a regression model so there are no objects:

l_user = frame_meta.frame_user_meta_list
while l_user is not None:
    user_meta = pyds.NvDsUserMeta.cast(l_user.data)
    l_user = l_user.next

but l_user is already None

I got the following output and I need help interpreting it:

1- l_frame.data is a null object
2- frame_meta2.num_output_layers = 32758 # and this changes every run
3- frame_meta2.out_buf_ptrs_dev is None
4- frame_meta2.out_buf_ptrs_host is None

So I need answers to these points:

  • Are the steps I did above right?
  • Why does l_user give me None?
  • I think there is no parsing function for this situation; should I write a C++ parsing function then?

Can you share your whole repro with us for further debugging?

Sure, here are the script and the config file; I will upload the model and send it through the inbox.

#!/usr/bin/env python3

################################################################################
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

import sys

sys.path.append('../')
import gi

gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst
from common.is_aarch_64 import is_aarch64
from common.bus_call import bus_call

import pyds

PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3

def osd_sink_pad_buffer_probe(pad, info, u_data):
    frame_number = 0
    # Initializing object counter with 0.
    obj_counter = {
        PGIE_CLASS_ID_VEHICLE: 0,
        # PGIE_CLASS_ID_PERSON: 0,
        # PGIE_CLASS_ID_BICYCLE: 0,
        # PGIE_CLASS_ID_ROADSIGN: 0
    }
    num_rects = 0

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        print("######## frame is not None")
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            # frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
            print("##################l_frame.data", l_frame.data)
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

            # The following is taken from the C++ code
            l_user = frame_meta.frame_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                print("inside l_user")  # this never prints
                try:
                    l_user = l_user.next
                except StopIteration:
                    break

            # trials
            # frame_meta2 = pyds.NvDsInferTensorMeta.cast(l_frame.data)
            # frame_meta3 = pyds.NvDsInferTensorMeta.cast(frame_meta)

        except StopIteration:
            break

        frame_number = frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta

        # frame meta
        # print("###### frame_meta.display_meta_list", frame_meta.display_meta_list)
        # print("###### frame_meta.frame_user_meta_list ",  frame_meta.frame_user_meta_list)
        # print("###### frame_meta.num_obj_meta <num_rects> ", num_rects)
        # print("###### frame_meta.obj_meta_list",  frame_meta.obj_meta_list)

        l_obj = frame_meta.obj_meta_list
        # print("###### l_obj <frame_meta.obj_meta_list> is ", l_obj)
        while l_obj is not None:
            print("###### obj is not none")
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                # obj_meta = pyds.glist_get_nvds_object_meta(l_obj.data)
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
                # print("######## obj meta is ", obj_meta)
            except StopIteration:
                break
            obj_counter[obj_meta.class_id] += 1
            obj_meta.rect_params.border_color.set(0.0, 0.0, 1.0, 0.0)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={}".format(
            frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font, font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK

def main(args):
    # Check input arguments
    if len(args) != 2:
        sys.stderr.write("usage: %s <h264 elementary stream>\n" % args[0])
        sys.exit(1)

    # Standard GStreamer initialization
    GObject.threads_init()
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for reading from the file
    print("Creating Source \n ")
    source = Gst.ElementFactory.make("filesrc", "file-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")

    # Since the data format in the input file is elementary h264 stream,
    # we need a h264parser
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvdec_h264 for hardware accelerated decode on GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    # Use convertor to convert from NV12 to RGBA as required by nvosd
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    # Finally render the osd output
    if is_aarch64():
        # transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
        transform = Gst.ElementFactory.make("queue", "queue")

    print("Creating EGLSink \n")
    sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    # sink = Gst.ElementFactory.make("nvoverlaysink", "nvvideo-renderer")
    sink.set_property('sync', 0)
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    print("Playing file %s " % args[1])
    source.set_property('location', args[1])
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)
    streammux.set_property('live-source', 1)
    pgie.set_property('config-file-path', "dstest1_pgie_config.txt")

    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)
    if is_aarch64():
        pipeline.add(transform)

    # we link the elements together
    # file-source -> h264-parser -> nvh264-decoder ->
    # nvinfer -> nvvidconv -> nvosd -> video-renderer
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)
    if is_aarch64():
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)

    # create an event loop and feed gstreamer bus messages to it
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Let's add a probe to get informed of the generated metadata; we add the
    # probe to the sink pad of the osd element, since by that time the buffer
    # will have gotten all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")

    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    # start play back and listen to events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)

if __name__ == '__main__':
    sys.exit(main(sys.argv))


Config file
################################################################################
# Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
#   custom-lib-path,
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB), model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, process-mode(Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
#is-classifier=1 # use this only when network-type=1
net-scale-factor=0.0039215697906911373
model-engine-file=./models/crowd_count_model.engine
labelfile-path=label_map.txt
force-implicit-batch-dim=1
batch-size=1
network-mode=1
network-type=100
output-tensor-meta=1
num-detected-classes=1
interval=0
#output-tensor-meta=TRUE
gie-unique-id=1
output-blob-names=output
#scaling-filter=0
#scaling-compute-hw=0
#parse-classifier-func-name=NvDsInferClassiferParseCustomSoftmax
#custom-lib-path=/opt/nvidia/deepstream/deepstream-5.0/sources/libs/nvdsinfer_customparser_crowd_count/libnvds_infercustomparser.so
[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1

Thanks, will check it.

Hey, I have requested Google Drive access; can you approve it?

I did not receive any requests, but I sent the link again with full access permissions, so please tell me if it does not work ^^

Yeah, I can download the model now. BTW, could you re-edit your Python source code or just upload the .py file?

Hey, what platform are you using? I cannot deserialize your engine file on my T4 device.

here’s the code and config:
https://drive.google.com/drive/folders/1toKiiwTUIbZhfz-t51zjGVL0kD9B8bz1?usp=sharing

Thanks, I can repro it now.
Ignore my last comment.

My GPU is a 2060 Super,
I am working inside the DeepStream docker,
I converted my model to an engine through TRT 7,
and here are some snippets from what I got when running deepstream-test1.

Okay, great ^^

Anything specific about the model's input stream? I mean, which stream should we use? I tried sample_720p.h264 inside the DS package and the user meta is always null.

Yes, I am using this sample,
and yes, I also get user meta = null;
that's why I told you above that I've done what you told me about the config and the Python code.
So please, I want you to:
1- check whether what I've done in the code and in the config is what you want;
2- if something is wrong, tell me what I should add or do to retrieve the output vector (1x11x20) as-is;
3- why is the meta equal to null?
4- note that I am not using any custom C++ parsing functions.

The config and code look good, but it seems the model's output is not attached to the gst_buffer, or the model's output is empty, or something else; I will look into the nvinfer code to address it. But you need to make sure the input stream inside the DS package can be used with your model; I'm not familiar with your model.
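
In the meantime, a quick check you could drop into the probe (again a sketch on the same pyds bindings; dump_meta_types is just an illustrative name): if it never prints the NVDSINFER_TENSOR_OUTPUT_META type, the tensor output is not being attached at all, which would point at the nvinfer config or the engine rather than at the probe code.

import pyds

def dump_meta_types(batch_meta):
    # List every frame-level user-meta type present on the batch.
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        if l_user is None:
            print("frame", frame_meta.frame_num, "has no frame-level user meta")
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            print("frame", frame_meta.frame_num, "user meta type:", user_meta.base_meta.meta_type)
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break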