Using an ONNX model as the secondary-gie

• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1 [L4T 32.5.1]
• TensorRT Version: 7.1.3.0
• Issue Type (questions, new requirements, bugs): questions
Hi,
I use jetson-inference to train classification and object detection models. The classification model is ResNet-18 and the object detection model is SSD-Mobilenet; both are exported to ONNX and run in DeepStream for inference.
For the primary-gie I followed GitHub - neilyoung/nvdsinfer_custom_impl_onnx and the detection results are good, but the results of the classification model used as the secondary-gie are not as expected. I don't know where the problem is.
The following are my settings:

• main_config_file.txt
[primary-gie]
enable=1
gpu-id=0
batch-size=1
gie-unique-id=1
interval=1
config-file=config_infer_primary_ssd.txt
nvbuf-memory-type=0

[secondary-gie0]
enable=1
gpu-id=0
batch-size=1
gie-unique-id=2
interval=3
operate-on-gie-id=1
#operate-on-class-ids=1
config-file=config_infer_secondary_mcu.txt
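A note on the commented-out operate-on-class-ids line above: left commented, the SGIE classifies objects of every class the PGIE detects. To restrict it to a single class, it can be uncommented with the zero-based index of that class in ssd_labels.txt, for example:

[secondary-gie0]
operate-on-gie-id=1
# classify only objects of class index 1 (illustrative index)
operate-on-class-ids=1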

• config_infer_primary_ssd.txt
[property]
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=0
model-engine-file=models/ssd_model.onnx_b1_gpu0_fp16.engine
labelfile-path=models/ssd_labels.txt
onnx-file=models/ssd_model.onnx
infer-dims=3;300;300

#0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=3

output-blob-names=boxes;scores
parse-bbox-func-name=NvDsInferParseCustomONNX
custom-lib-path=nvdsinfer_custom_impl_onnx/libnvdsinfer_custom_impl_onnx.so
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1
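With these values, the nvinfer preprocessing y = net-scale-factor * (x - offset) works out to y = (x - 127.5) / 127.5, mapping pixels from [0, 255] to [-1, 1], which is the input normalization that SSD-Mobilenet models trained with jetson-inference expect.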

• config_infer_secondary_mcu.txt
[property]
net-scale-factor=1

model-engine-file=models/resnet18_model.onnx_b1_gpu0_fp16.engine
labelfile-path=models/resnet18_labels.txt
onnx-file=models/resnet18_model.onnx

#force-implicit-batch-dim=1
batch-size=1
model-color-format=0
process-mode=2

infer-dims=3;224;224

#0=FP32, 1=INT8, 2=FP16 mode
network-mode=2

is-classifier=1

classifier-async-mode=1
classifier-threshold=0.2

input-object-min-width=128
input-object-min-height=128
#scaling-filter=0
#scaling-compute-hw=0
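Note that with net-scale-factor=1 and no offsets, the same preprocessing formula reduces to y = 1 * (x - 0) = x, so this classifier receives raw pixel values in [0, 255] rather than normalized inputs.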

Thanks.

Have you verified that this classification model is well trained and gets good accuracy outside of DeepStream?

Yes. After training the model with jetson-inference I verified its accuracy, and it classifies the photographed objects correctly. But it does not identify them correctly in DeepStream.

OK, I think you need to check the properties in the SGIE config file, especially offsets, maintain-aspect-ratio, and operate-on-gie-id, highlighted below.
The meaning of these properties is described in Gst-nvinfer — DeepStream 6.1.1 Release documentation

• config_infer_secondary_mcu.txt
[property]
net-scale-factor=1
offsets=…

model-engine-file=models/resnet18_model.onnx_b1_gpu0_fp16.engine
labelfile-path=models/resnet18_labels.txt
onnx-file=models/resnet18_model.onnx

#force-implicit-batch-dim=1
batch-size=1
model-color-format=0
process-mode=2

infer-dims=3;224;224

#0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
maintain-aspect-ratio=…
is-classifier=1

classifier-async-mode=1
classifier-threshold=0.2

input-object-min-width=128
input-object-min-height=128
#scaling-filter=0
#scaling-compute-hw=0
operate-on-gie-id=1  # must equal the gie-unique-id of the PGIE
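For reference, a minimal sketch of how those placeholders might be filled in, assuming the classifier was trained with jetson-inference's default ImageNet normalization (mean 0.485/0.456/0.406, std 0.229/0.224/0.225, applied to RGB pixels scaled to [0, 1]). nvinfer supports only a single scalar scale factor, so the three std values are approximated here by their average (about 0.226); verify all of these against your actual training preprocessing before using them:

[property]
# y = net-scale-factor * (x - offsets) approximates y = (x/255 - mean) / std
# offsets = 255 * mean; net-scale-factor = 1 / (255 * 0.226), using the average std (assumption)
net-scale-factor=0.0173520
offsets=123.675;116.28;103.53
# 0=RGB; jetson-inference feeds RGB tensors to ONNX classification models
model-color-format=0
# jetson-inference resizes without preserving aspect ratio (assumption), so leave this off
maintain-aspect-ratio=0
# per the Gst-nvinfer docs, classifier-async-mode works only when a tracker
# attaches tracker IDs to objects; with no tracker in the pipeline, disable it
classifier-async-mode=0
operate-on-gie-id=1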
