Creating parse-bbox for SSD-MobileNet model

Hello,

I want to run an ONNX model developed by a colleague in the “deepstream-test1-usbcam” Python example. (That sample is written around a Caffe model and its engine.)

We have been using this ONNX model with jetson-inference’s detectNet, but now want to switch to DeepStream:
net = jetson.inference.detectNet(argv=['--model=/home/'+currentUser+'/jetson-inference/python/training/detection/ssd/models/'+folderName+'/ssd-mobilenet.onnx', '--labels=/home/'+currentUser+'/jetson-inference/python/training/detection/ssd/models/'+folderName+'/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes', '--threshold=0.20'])
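For what it’s worth, my rough understanding of how those detectNet arguments would map onto the nvinfer [property] keys is sketched below (key names are standard nvinfer config keys; the paths are illustrative, and I’m not certain pre-cluster-threshold is the right key on every DeepStream version):

# [property]
onnx-file=ssd-mobilenet.onnx        # from --model
labelfile-path=labels.txt           # from --labels
output-blob-names=scores;boxes      # from --output-cvg / --output-bbox

# [class-attrs-all]
pre-cluster-threshold=0.2           # from --threshold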

I am having trouble making the appropriate changes to the pgie configuration .txt file in the “deepstream-test1-usbcam” example, specifically with choosing or writing a parse-bbox function.

I have tried following this guide (jetson-inference model with DeepStream in C++):
https://elinux.org/index.php?title=Jetson/L4T/TRT_Customized_Example#Custom_Parser_for_SSD-MobileNet_Trained_by_Jetson-inference

I am getting this error:

0:00:05.398126144 17308     0x39f21720 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:724> [UID = 1]: Failed to parse bboxes using custom parse function
Segmentation fault (core dumped)

pgie file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373

onnx-file=detection/ssd-mobilenet.onnx
model-engine-file=detection/ssd-mobilenet.onnx_b1_gpu0_fp16.engine
labelfile-path=detection/labels.txt
force-implicit-batch-dim=1
batch-size=1
## 0 = FP32, 1 = INT8, 2 = FP16 mode
network-mode=2
num-detected-classes=10
#interval=0
gie-unique-id=1
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=../../../objectDetector_SSD/nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
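For reference, here is a minimal sketch of what a custom parser for this model might look like. It assumes the jetson-inference ONNX export produces two raw output layers, "scores" (1 × N × C, class 0 = background) and "boxes" (1 × N × 4, normalized x1/y1/x2/y2 corners); those names and shapes are assumptions to verify against your model, and the stock objectDetector_SSD parser may expect a different output layout. The DeepStream structs are replaced with hypothetical stand-ins so the sketch compiles on its own; in a real plugin they come from nvdsinfer_custom_impl.h.

```cpp
#include <string>
#include <vector>

// Hypothetical stand-ins for the DeepStream types normally provided by
// nvdsinfer_custom_impl.h -- included only so this sketch is self-contained.
struct NvDsInferLayerInfo {
    std::string layerName;
    const float *buffer;   // flattened output tensor
};

struct NvDsInferObjectDetectionInfo {
    unsigned int classId;
    float left, top, width, height;
    float detectionConfidence;
};

// Sketch of a parser assuming raw outputs "scores" [1 x N x C] and
// "boxes" [1 x N x 4] with normalized (x1, y1, x2, y2) corners.
// numAnchors (N) and numClasses (C, background at index 0) must match
// the model -- both are assumptions here.
static bool ParseSSDMobilenet(const std::vector<NvDsInferLayerInfo> &layers,
                              unsigned int netW, unsigned int netH,
                              float threshold,
                              unsigned int numAnchors, unsigned int numClasses,
                              std::vector<NvDsInferObjectDetectionInfo> &objects)
{
    const float *scores = nullptr, *boxes = nullptr;
    for (const auto &l : layers) {
        if (l.layerName == "scores") scores = l.buffer;
        else if (l.layerName == "boxes") boxes = l.buffer;
    }
    if (!scores || !boxes) return false;  // expected layers missing

    for (unsigned int i = 0; i < numAnchors; ++i) {
        // Pick the best non-background class for this anchor.
        unsigned int bestClass = 0;
        float bestScore = threshold;
        for (unsigned int c = 1; c < numClasses; ++c) {
            float s = scores[i * numClasses + c];
            if (s > bestScore) { bestScore = s; bestClass = c; }
        }
        if (bestClass == 0) continue;  // nothing above threshold

        // Convert normalized corners to pixel left/top/width/height.
        const float *b = &boxes[i * 4];
        NvDsInferObjectDetectionInfo obj{};
        obj.classId = bestClass;
        obj.detectionConfidence = bestScore;
        obj.left   = b[0] * netW;
        obj.top    = b[1] * netH;
        obj.width  = (b[2] - b[0]) * netW;
        obj.height = (b[3] - b[1]) * netH;
        objects.push_back(obj);
    }
    return true;
}
```

In the real plugin the function would use the NvDsInferParseCustomFunc signature and be exported with the name given in parse-bbox-func-name, then built into the .so referenced by custom-lib-path.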

Environment

TensorRT Version : 7.1.3
CUDA Version : 10.2
Operating System + Version :
GPU: NVIDIA Tegra X1 (nvgpu)/integrated
CPU: ARMv8 Processor rev 1 (v8l) × 4
Python Version (if applicable) : Python3

Hi,
This looks like a Jetson issue. We recommend raising it with the respective platform via the link below

Thanks!