ONNX file config

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

Hi, this is my config file. I need help with the bbox parser function and custom-lib-path for ONNX:

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     Apache License, Version 2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file (only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   cluster-mode (Default=Group Rectangles), interval (primary mode only, Default=0),
#   custom-lib-path,
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode (secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id (Default=0), operate-on-class-ids (defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size (Default=1)
#
# Other optional properties:
#   net-scale-factor (Default=1), network-mode (Default=0, i.e. FP32),
#   model-color-format (Default=0, i.e. RGB), model-engine-file, labelfile-path,
#   mean-file, gie-unique-id (Default=0), offsets, process-mode (Default=1, i.e. primary),
#   custom-lib-path
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
net-scale-factor=1
onnx-file=/opt/nvidia/deepstream/deepstream-6.4/models/assets/age/age_normal_opset13.onnx

model-engine-file=/opt/nvidia/deepstream/deepstream-6.4/models/assets/age/normal_age_gf_rtx.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.4/models/assets/age/normal_age_label.txt
force-implicit-batch-dim=1
batch-size=1

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
input-object-min-width=5
input-object-min-height=5
process-mode=2
model-color-format=1
gpu-id=0
gie-unique-id=4
operate-on-gie-id=2
#operate-on-class-ids=0
#is-classifier=1
interval=0

output-blob-names=dense_2
#parse-bbox-func-name=NvDsInferParseOnnx
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.4/sources/libs/nvdsinfer_customparser/libnvds_infercustomparser.so

classifier-async-mode=0
classifier-threshold=0.00001
process-mode=2
#scaling-filter=0
#scaling-compute-hw=0

output-tensor-meta=1
network-type=1

ERROR: [TRT]: ModelImporter.cpp:777: ERROR: ModelImporter.cpp:547 In function importModel:
[4] Assertion failed: !mImporterCtx.network()->hasImplicitBatchDimension() && "This version of the ONNX parser only supports TensorRT INetworkDefinitions with an explicit batch dimension. Please ensure the network was created using the EXPLICIT_BATCH NetworkDefinitionCreationFlag."
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:315 Failed to parse onnx file
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:971 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:804 failed to build network.
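The assertion comes from the TensorRT ONNX parser, which only builds explicit-batch networks; `force-implicit-batch-dim=1` in the `[property]` group above forces an implicit batch dimension and triggers it. A minimal sketch of the fix, keeping the paths from the config above:

```ini
# The ONNX parser requires an explicit batch dimension, so remove
# force-implicit-batch-dim (or set it to 0) whenever onnx-file is used:
onnx-file=/opt/nvidia/deepstream/deepstream-6.4/models/assets/age/age_normal_opset13.onnx
#force-implicit-batch-dim=1
batch-size=1
```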

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
• The pipeline being used

GPU
DeepStream 6.4
TensorRT version 8.6
Driver version 545
Reproduce the issue by replacing the config with the one I mentioned above.

What will be the respective values for the above two variables?
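Since the model in this config is a classifier (network-type=1, output-tensor-meta=1), parse-bbox-func-name does not apply at all: bbox parsing is a detector concept. A hedged sketch of the relevant lines, assuming the stock custom-parser sample library that ships with DeepStream:

```ini
# Classifier: no bbox parser needed. Either rely on nvinfer's built-in
# softmax parsing of the output layer plus the label file:
network-type=1
output-blob-names=dense_2
classifier-threshold=0.1
# ...or plug in the sample custom classifier parser instead:
#custom-lib-path=/opt/nvidia/deepstream/deepstream-6.4/sources/libs/nvdsinfer_customparser/libnvds_infercustomparser.so
#parse-classifier-func-name=NvDsInferClassiferParseCustomSoftmax
```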

Please share the sample code and model that reproduce the problem. I can’t get anything from your answer.

This is my detector model, trained using YOLO, and this is its config:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
custom-network-config=/opt/nvidia/deepstream/deepstream-6.4/models/assets/normalhead/head.cfg
model-file=/opt/nvidia/deepstream/deepstream-6.4/models/assets/normalhead/yolov4-tiny_best.weights
model-engine-file=/opt/nvidia/deepstream/deepstream-6.4/headface.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.4/models/assets/normalhead/head.txt

force-implicit-batch-dim=1
batch-size=1
network-mode=2
process-mode=1
model-color-format=0
num-detected-classes=1
interval=0
gie-unique-id=1
#operate-on-class-ids=0
#operate-on-gie-id=1
output-blob-names=num_detections;detection_boxes;detection_scores;detection_classes
#scaling-filter=0
#scaling-compute-hw=0
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.4/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
#output-tensor-meta=0
#network-type=1

[class-attrs-all]
pre-cluster-threshold=0.1
eps=0.2
group-threshold=1

I am looking for the ONNX config parameters:
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.4/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

What should be the respective values for these three variables if I am using an ONNX file?

Just help me with these three values if I provide an onnx-file.
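When onnx-file is set, nvinfer builds the engine itself through TensorRT's ONNX parser, so the three properties change meaning. A hedged sketch, assuming the DeepStream-Yolo custom library is no longer involved (the placeholder path is yours to fill in):

```ini
# With onnx-file set, nvinfer builds the engine on its own, so
# engine-create-func-name (a custom engine-builder hook) is not needed.
onnx-file=<path to your .onnx>
# parse-bbox-func-name applies only to detectors; for a classifier
# (network-type=1) leave it out.
# custom-lib-path is only needed if you supply a custom parser, e.g. the
# sample library that provides NvDsInferClassiferParseCustomSoftmax:
#custom-lib-path=/opt/nvidia/deepstream/deepstream-6.4/sources/libs/nvdsinfer_customparser/libnvds_infercustomparser.so
#parse-classifier-func-name=NvDsInferClassiferParseCustomSoftmax
```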

Here is a DeepStream sample of YOLOv4. This is not just a problem of configuration files.

I am not looking for YOLO-to-ONNX conversion;
I am looking for the config file values that take an ONNX model (exported with tf2onnx) to an engine.

I want to run inference with my custom ONNX model; it is a classification model, not a detection model.
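For a custom classifier, the parsing a custom library would do is just softmax plus argmax over the raw output layer. Below is a simplified, self-contained sketch of that logic; the `Attribute` struct is a stand-in (assumption) for `NvDsInferAttribute` from `nvdsinfer.h`, and in a real plugin this function body would sit inside an entry point with the `NvDsInferClassiferParseCustomFunc` signature, not stand alone:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Stand-in for NvDsInferAttribute (assumption: the real parser fills
// attributeIndex / attributeConfidence fields analogous to these).
struct Attribute {
    unsigned int label_id;  // index of the winning class
    float confidence;       // softmax probability of that class
};

// Core of a custom classifier parser: numerically stable softmax over
// the raw logits (e.g. the "dense_2" layer), then keep the argmax if it
// clears the classifier threshold.
static bool parse_softmax(const std::vector<float>& logits,
                          float threshold, Attribute& out) {
    if (logits.empty()) return false;
    // Subtract the max logit before exponentiating to avoid overflow.
    float max_logit = *std::max_element(logits.begin(), logits.end());
    std::vector<float> exps(logits.size());
    float sum = 0.f;
    for (std::size_t i = 0; i < logits.size(); ++i) {
        exps[i] = std::exp(logits[i] - max_logit);
        sum += exps[i];
    }
    std::size_t best = 0;
    for (std::size_t i = 1; i < exps.size(); ++i)
        if (exps[i] > exps[best]) best = i;
    out.label_id = static_cast<unsigned int>(best);
    out.confidence = exps[best] / sum;
    return out.confidence >= threshold;
}
```

The threshold plays the role of `classifier-threshold` from the config; returning false means no attribute is attached to the object.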

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.