Creating parse-bbox

Hello,

I’m new to using the NVIDIA Jetson Nano and DeepStream 5.1.

I want to use an ONNX model developed by a colleague with the “deepstream-test1-usbcam” Python example.

We have been using this model with detectNet but now want to switch to DeepStream:

```
net = jetson.inference.detectNet(argv=['--model=/home/'+currentUser+'/jetson-inference/python/training/detection/ssd/models/'+folderName+'/ssd-mobilenet.onnx', '--labels=/home/'+currentUser+'/jetson-inference/python/training/detection/ssd/models/'+folderName+'/labels.txt', '--input-blob=input_0', '--output-cvg=scores', '--output-bbox=boxes', '--threshold=0.20'])
```

I am having trouble making the appropriate changes to the pgie configuration .txt file in the “deepstream-test1-usbcam” example, specifically with making/selecting a parse-bbox function; my attempted change is sketched below.
(The sample config is written to read a caffemodel and its engine.)
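Here is roughly what I think the change looks like (a sketch only: the paths are placeholders, and the blob names are my assumption taken from the detectNet arguments above):

```
# sketch: replace the sample's caffemodel keys with ONNX ones
# model-file=...caffemodel   <- remove
# proto-file=...prototxt     <- remove
onnx-file=/path/to/ssd-mobilenet.onnx
labelfile-path=/path/to/labels.txt
# assumed from --output-bbox=boxes / --output-cvg=scores above
output-blob-names=boxes;scores
# SSD-style outputs also need a custom parser:
parse-bbox-func-name=<function exported by the parser library>
custom-lib-path=<path to the parser .so>
```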

Error:

0:00:05.089327290 14198     0x3b39b320 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:724> [UID = 1]: Failed to parse bboxes using custom parse function
Segmentation fault (core dumped)


Is there a general config file that you can recommend?

Or are there instructions on how to make the config file for onnx?

Environment

TensorRT Version: 7.1.3
CUDA Version: 10.2
GPU: NVIDIA Tegra X1 (nvgpu)/integrated
CPU: ARMv8 Processor rev 1 (v8l) x4
Python Version (if applicable): Python 3

Relevant Files

deepstream_python_apps/apps/deepstream-test1-usbcam at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

deepstream_python_apps/dstest1_pgie_config.txt at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

Hi,
This looks like a Jetson issue. We recommend you raise it on the respective platform via the link below.

Thanks!

Hi,

We have implemented an example for a jetson-inference model with DeepStream using the C++ interface below:
https://elinux.org/index.php?title=Jetson/L4T/TRT_Customized_Example#Custom_Parser_for_SSD-MobileNet_Trained_by_Jetson-inference
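For orientation, the parser is a shared library that exports the standard bbox-parsing callback declared in nvdsinfer_custom_impl.h; a minimal skeleton (the full SSD implementation is on the page above, and the decoding logic here is only a placeholder) looks roughly like this:

```
#include <vector>
#include "nvdsinfer_custom_impl.h"

/* Entry point named by parse-bbox-func-name in the pgie config.
 * nvinfer hands over the raw output tensors; the function must
 * fill objectList with decoded detections. */
extern "C" bool NvDsInferParseCustomSSD(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    for (const NvDsInferLayerInfo &layer : outputLayersInfo) {
        const float *data = static_cast<const float *>(layer.buffer);
        (void)data; /* decode SSD scores/boxes into objectList here */
    }
    (void)networkInfo;
    (void)detectionParams;
    return true;
}

/* Compile-time check that the signature matches what nvinfer expects. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomSSD);
```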

Could you give it a try first?
Thanks.

Hello,

I built the custom parser you linked above, but I am still getting the same error.

I am using an SSD-MobileNet model.

From the config file, can you see if I am doing something wrong?

0:00:05.398126144 17308     0x39f21720 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:724> [UID = 1]: Failed to parse bboxes using custom parse function
Segmentation fault (core dumped)

```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373

onnx-file=detection/ssd-mobilenet.onnx
model-engine-file=detection/ssd-mobilenet.onnx_b1_gpu0_fp16.engine
labelfile-path=detection/labels.txt
force-implicit-batch-dim=1
batch-size=1
## 0 = FP32, 1 = INT8, 2 = FP16 mode
network-mode=2
num-detected-classes=10
#interval=0
gie-unique-id=1
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=../../../objectDetector_SSD/nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
```
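One thing I am not sure about: the library above comes from the objectDetector_SSD sample, and my config does not set output-blob-names. Based on the detectNet arguments in my first post, I assume the exported ONNX model's tensors are input_0, scores, and boxes, so perhaps the config needs something like:

```
# my assumption, from --output-bbox=boxes / --output-cvg=scores
output-blob-names=boxes;scores
```

If the parser looks for different layer names than these, that could explain the parse failure.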

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks!

Hi,

We would like to check this in our environment.
Could you share the model and the corresponding configuration file with us?

Thanks.