Deploy TRT object detection model (MobileNetV2) with DeepStream: "Failed to parse bboxes" error

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.0.1-20.09-triton
• TensorRT Version: 7.0.0
• NVIDIA GPU Driver Version (valid for GPU only): 460.32.03
• Issue Type (questions, new requirements, bugs): questions

Hi guys,

At the moment I'm trying to deploy the MobileNetV2 object detection model from the TF1 Model Zoo.
I went via the UFF parser: with it I created a UFF model (which seems to be fine) and a TRT engine. (trtexec_verbose_log.txt (1.0 MB))

Next I tried to use the trt.engine in DeepStream for inference, created some .config files, and used the sample configs from DeepStream as a guide. I'm certain that I have made some mistakes in the .config files, because I get some errors after a window opens for a short time and then disappears.

I think I have to choose or implement my own parser for the bounding boxes. In the folder sources/objectDetector_SSD/ I found a custom nvdsparsebbox_ssd.cpp file, but I think that example is for an InceptionV1 or V2 model, so I don't think I can use it for my problem.

Question

Do I have to write my own parser for the bounding boxes to get rid of the Error or is there an easier solution?

Or is my TRT Model not converted properly?

Error

0:00:01.691210145 71 0x562e09447680 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects

0:00:01.691241895 71 0x562e09447680 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:734> [UID = 1]: Failed to parse bboxes

Segmentation fault (core dumped)

Config_Files
config_infer_primary_test.txt (3.2 KB)
source_1080p_dec_infer_mobilenetv2_tf.txt (4.3 KB)

Cheers

Hi,

The error occurs because the custom parser information has not been added to the configuration file.

Without this information, DeepStream parses the MobileNet output with its default detection parser, which expects 'coverage' and 'bbox' output layers that are not available in your model.

To solve this, please add the following to config_infer_primary_test.txt and try again.

[property]
..
force-implicit-batch-dim=1
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so

Here is a related comment for your reference:

Thanks.

Thanks for the quick reply, @AastaLLL.

Your solution works, but I had to do more than just copy those lines into the .config file.

Under "Workflow" below you can find my way to a working model. The workflow might be hard to understand and follow in some parts, especially because I used DeepStream 5.0 in a Docker environment. Therefore the steps may differ if you don't use Docker.
I hope this post helps others to get MobileNetV2 working in DeepStream and TensorRT. You can also find some files in the post. I'm aware the conversion could be done more elegantly, but it works for me.

Important

I use the DeepStream 5.0 Docker container (deepstream:5.0.1-20.09-triton) and TensorFlow 1.15.


Workflow

  1. Download the MobileNetV2 model from the TensorFlow 1 Model Zoo
  2. I created two Python files for the conversion of the frozen_inference_graph.pb into a TensorRT model via the UFF parser (see Update 2, please!)

For the conversion into a UFF model
First File: uff_converter.py (2.9 KB)

For the conversion I used the DeepStream 5.0 Docker container, but you need to install TensorFlow 1 (I used 1.15.5) inside the container. For that you have to install pip for Python 3; then you can install, for example, tensorflow-gpu==1.15.5. With TensorFlow you get the graphsurgeon and uff libraries, and you can convert the .pb model to .uff with the .py script (a rough sketch of the call is shown below).

You can also use the TensorRT Docker container, but be careful: you have to use the TensorRT 7.0.0 container, because the DeepStream 5.0 container ships TensorRT 7.0.0, and if the TensorRT versions don't match you get errors in DeepStream. For the conversion you have to run a script inside that container which installs TensorFlow 1.15.5 and other libraries and programs. You can find the script under /opt/tensorrt/python and run it with ". python_setup.sh". Then you can also do the conversion there.
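For orientation, here is a rough sketch of what uff_converter.py does. The file names and the preprocessor config are placeholders, and the "NMS" output node only exists after a graphsurgeon preprocessor (like the config.py from TensorRT's sampleUffSSD) has rewritten the graph, so treat this as a sketch rather than the exact script:

# Rough sketch of the UFF conversion (uff_converter.py); paths are placeholders.
import uff

uff.from_tensorflow_frozen_model(
    frozen_file="frozen_inference_graph.pb",  # from the TF1 Model Zoo download
    output_nodes=["NMS"],                     # output node created by the preprocessor
    output_filename="mobilenetv2.uff",
    preprocessor="config.py",                 # graphsurgeon rules that map TF ops to TRT plugins
    text=False,
)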

UFF into TRT
Second File: trt_engine_build_from_uff.py (1.0 KB)

After the UFF conversion you can use this script. It transforms the UFF model into a TRT engine.
My TRT model of MobileNetV2 with 90 classes is roughly 80 MB (FP32).
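For orientation, here is a rough sketch of what trt_engine_build_from_uff.py does with the TensorRT 7 Python API. The input/output names ("Input", "MarkOutput_0"), the 3x300x300 dimensions and the file names are assumptions taken from a sampleUffSSD-style conversion; adjust them to your own model:

# Rough sketch of building a TRT engine from the UFF file (TensorRT 7 Python API).
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")  # register the NMS/GridAnchor/FlattenConcat plugins

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    parser.register_input("Input", (3, 300, 300))   # assumed input name/dims
    parser.register_output("MarkOutput_0")          # assumed output name
    parser.parse("mobilenetv2.uff", network)

    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 30            # 1 GB workspace
    engine = builder.build_cuda_engine(network)     # FP32 engine

    with open("mobilenetv2_fp32.engine", "wb") as f:
        f.write(engine.serialize())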

  3. Now you need to create .config files to run the TRT model with DeepStream 5.0.

For that I used the .config files from the samples and sources folders in DeepStream 5.0; you can find them under /opt/nvidia/deepstream/deepstream-5.0/ in the Docker container.
I used the configs from /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD (you need the whole directory). These are the config files I used for my inference; you have to adapt the paths to your project, and you also need a labels.txt file. A sketch of the key entries follows after the file list.

config_infer_primary_ssd.txt (3.3 KB)
source_1080p_dec_infer_mobilenetv2_tf.txt (4.3 KB)
labels.txt (702 Bytes)
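For orientation, the key entries in the [property] group of config_infer_primary_ssd.txt look roughly like the sketch below. The paths are placeholders, and the layer names (Input, MarkOutput_0), dimensions and pre-processing values come from the sampleUffSSD-style setup, so check them against your own model and the attached files:

[property]
gpu-id=0
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=0
uff-file=mobilenetv2.uff
model-engine-file=mobilenetv2_fp32.engine
labelfile-path=labels.txt
infer-dims=3;300;300
uff-input-order=0
uff-input-blob-name=Input
output-blob-names=MarkOutput_0
force-implicit-batch-dim=1
batch-size=1
network-mode=0
num-detected-classes=91
gie-unique-id=1
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so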

  4. Compile the Makefile in the objectDetector_SSD/nvdsinfer_custom_impl_ssd folder

Building with the Makefile in that folder produces libnvdsinfer_custom_impl_ssd.so. Now DeepStream can use the custom detection parser for the SSD model and knows what to do with the "NMS" output layer you created in the UFF conversion. If you don't have this file, or the path to it is wrong, you also get errors.

Important
I had to make changes to the Makefile, otherwise I also ran into errors. For the DeepStream 5.0 container I used this Makefile.
Makefile (1.9 KB)
The important changes are:

  • CUDA_VER:=10.2
  • CFLAGS+= -I../../includes -I/usr/local/cuda-$(CUDA_VER)/include -I/opt/nvidia/deepstream/deepstream-5.0/sources/includes
  • LIBS:= -lnvinfer -lnvparsers -L/usr/local/cuda-$(CUDA_VER)/lib64 -lcudart -lcublas -L/opt/nvidia/deepstream/deepstream-5.0/sources/libs

Otherwise you get errors that the .cpp files can't find the headers (#include "NvInferPlugin.h" and #include "nvdsinfer_custom_impl.h").

  5. Copy the .uff model, the TRT engine and the config files into a folder on your host which the DeepStream container can access.

I copied the /opt/nvidia/deepstream/deepstream-5.0/deepstream folder with the /samples and /sources directories into a directory on my host which both the host and the Docker container have access to. Then check the config files for the correct paths to the UFF model, the TRT engine, the config files, labels.txt and the libnvdsinfer_custom_impl_ssd.so file.

  6. You can start the inference with "deepstream-app -c deepstream_app_config_ssd.txt"

Sources:

For the .config files I used these sources:

Changes to the Makefile (careful: for Docker the paths are different!):


What is the actual performance of the MobileNetV2 TRT model?
I get roughly 30 FPS for FP32. I thought it would be faster.
Can you roughly confirm that speed for MobileNetV2, @AastaLLL?

Update
If I change the "sync" option under the [sink0] group of the "source_…" file from 1 to 0, I get 60 FPS.
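For clarity, the changed key in that group (everything else stays as in the attached file):

[sink0]
sync=0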

Update 2

If you trained MobileNetV2 with TF 1.15, you can't convert it with the UFF parser, because TensorRT can't handle BatchNorm/FusedBatchNormV3. FusedBatchNormV3 was introduced in TensorFlow 1.15. Maybe using TF 1.14 can solve this problem.