DeepStream ONNX model creates no detections

Please provide complete information as applicable to your setup.

• Jetson Xavier NX
• DeepStream V5.1
• JetPack Version 4.5.1
• Unable to render detections with a custom ONNX model exported from PyTorch

I’ve created a custom detection dataset following the jetson-inference tutorial from dustynv.

When I run the detectnet program from that example, I am able to detect my trained objects. I’ve since tried to import the ONNX model into the USB-camera DeepStream example.

Below is my pgie configuration file:

################################################################################
# Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
#   custom-lib-path,
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
onnx-file=./models/ssd-mobilenet.onnx
model-engine-file=./models/ssd-mobilenet.onnx_b1_gpu0_fp32.engine
labelfile-path=./models/labels.txt
batch-size=1
network-mode=0
num-detected-classes=2
model-color-format=0
interval=0
gie-unique-id=1
classifier-threshold=0.01
network-type=1
process-mode=1
output-blob-names=scores;boxes
parse-bbox-func-name=NvDsInferParseCustomONNX
custom-lib-path=/opt/nvidia/deepstream/deepstream-5.1/sources/nvdsinfer_custom_impl_onnx/nvd/libnvdsinfer_custom_impl_onnx.so

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1

I am able to start the script and have it run the video feed; however, no detections are ever reported. Below is the output from the script startup:

Creating Pipeline 
 
Creating Source 
 
Creating Video Converter 

Creating EGLSink 

Playing cam /dev/video0 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 


Using winsys: x11 
0:00:04.969834401 18149   0x7f74003ea0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/home/drone1/Documents/deepstream-test1-usbcam/models/ssd-mobilenet.onnx_b1_gpu0_fp32.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_0         3x300x300       
1   OUTPUT kFLOAT scores          3000x2          
2   OUTPUT kFLOAT boxes           3000x4          

0:00:04.970064803 18149   0x7f74003ea0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /home/drone1/Documents/deepstream-test1-usbcam/models/ssd-mobilenet.onnx_b1_gpu0_fp32.engine
0:00:04.978146033 18149   0x7f74003ea0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
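For context, the engine info above shows raw, unclustered SSD outputs (3000 candidate boxes with per-class scores), which is why a model-specific parser has to decode them before DeepStream can draw anything. Below is a minimal decoding sketch in Python. It assumes the jetson-inference pytorch-ssd export conventions (class 0 is background, scores are already softmax-normalized confidences, boxes are corner-format coordinates in [0, 1]); verify these assumptions against your actual training and export code:

```python
import numpy as np

def decode_ssd_outputs(scores, boxes, conf_threshold=0.5):
    """Decode raw SSD-Mobilenet outputs: scores (3000, 2), boxes (3000, 4).

    Assumptions (verify against your export): class 0 is background,
    scores are already softmax-normalized confidences, and boxes are
    (x1, y1, x2, y2) in normalized [0, 1] coordinates.
    """
    detections = []
    for i in range(scores.shape[0]):
        class_id = int(np.argmax(scores[i]))
        conf = float(scores[i][class_id])
        if class_id == 0 or conf < conf_threshold:
            continue  # drop background and low-confidence anchors
        detections.append((class_id, conf, *map(float, boxes[i])))
    return detections

# Synthetic example: only the middle anchor is a confident foreground hit
scores = np.array([[0.9, 0.1], [0.2, 0.8], [0.95, 0.05]])
boxes = np.array([[0.0, 0.0, 1.0, 1.0],
                  [0.1, 0.1, 0.4, 0.5],
                  [0.0, 0.0, 1.0, 1.0]])
print(decode_ssd_outputs(scores, boxes))
```

If the custom parser library makes different assumptions about any of these conventions, every decoded box will be garbage even though the engine loads cleanly.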

I followed this thread as an example; however, unlike in that thread, I am not able to get any reported detections.

I am hoping for some guidance on what is required in the pgie configuration file to properly import an ONNX model, or on how to properly build an ONNX export for DeepStream.

Thanks for your assistance!

Hi,

Please note that we have a new DeepStream 6.0 release already.
It’s recommended to move to the latest software first.

The model from jetson-inference requires a custom output parser.
We are not sure which model you use, but you can find an example for SSD-MobileNet on the wiki page below:
https://elinux.org/Jetson/L4T/TRT_Customized_Example#Custom_Parser_for_SSD-MobileNet_Trained_by_Jetson-inference

Thanks.

Thanks for your response.

I’m already using a custom parser:

parse-bbox-func-name=NvDsInferParseCustomONNX
custom-lib-path=/opt/nvidia/deepstream/deepstream-5.1/sources/nvdsinfer_custom_impl_onnx/nvd/libnvdsinfer_custom_impl_onnx.so

I’ll look into upgrading to version 6.0 to see if that helps.

Hi,

Please note that the parser is model-specific rather than format-specific.
After upgrading to DeepStream 6.0, please try the custom parser shared in the above link.

Thanks.

I was able to get the model to run using the link you provided and with deepstream 6.0. Thanks for your assistance!

Good to know it works!
Thanks for the update.

I’m still having an issue with the loaded model. The model loads; however, the detections do not appear to function in any usable capacity. I followed the pgie configuration from the link, and I’ve tried adjusting detection threshold values and detection sizes, but the output just appears to be noise.

Here is a screenshot of the model running with jetson-inference detectnet:

I am hoping you can provide some guidance on which parameters need to be adjusted to optimize my model’s detections for DeepStream.

Thanks again!

Hi,

Could you first check whether the input pre-processing in jetson-inference and DeepStream is identical?
This is controlled by the net-scale-factor and offsets fields in the configuration file.

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html#gst-nvinfer

y = net-scale-factor * (x - mean)
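To make the comparison concrete, here is a small sketch of that formula. The posted config (net-scale-factor ≈ 1/255, no offsets) maps pixels into [0, 1], whereas the jetson-inference SSD-Mobilenet export is often described as normalizing with mean 127.5 and scale 1/127.5, i.e. roughly a [-1, 1] input range. Those 127.5 values are an assumption here and should be checked against the training code:

```python
def preprocess(x, net_scale_factor, offset):
    # DeepStream nvinfer pre-processing: y = net-scale-factor * (x - mean)
    return net_scale_factor * (x - offset)

# Posted config: net-scale-factor = 1/255, no offsets -> pixels land in [0, 1]
print(round(preprocess(255, 1 / 255.0, 0.0), 6))    # 1.0
print(round(preprocess(0, 1 / 255.0, 0.0), 6))      # 0.0
# Assumed jetson-inference settings: scale 1/127.5, offsets 127.5 -> [-1, 1]
print(round(preprocess(255, 1 / 127.5, 127.5), 6))  # 1.0
print(round(preprocess(0, 1 / 127.5, 127.5), 6))    # -1.0
```

If the [-1, 1] convention turns out to be correct for your export, the corresponding config lines would be net-scale-factor=0.0078431372 and offsets=127.5;127.5;127.5.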

Thanks.
