Failed to convert ssd_inception_v2 from TensorFlow Object Detection to a TensorRT engine

Environment:
Linux version: Ubuntu 18.04
GPU type: Titan X
CUDA version: 10.0
Framework: TensorRT 5.1.5-1, TensorFlow-gpu 1.12

Dear all,

We use the TensorFlow Object Detection API to train models, and we would like to convert them to UFF and then use them in TensorRT.
Thanks to the script in the UFF sample provided by NVIDIA, we can convert the ssd_inception_v2 model from the TensorFlow model zoo to UFF and then create an engine.

The problem is that when we retrain the same model on the COCO dataset, the new model can no longer be converted to a TensorRT engine.
In TensorBoard, the retrained graph appears slightly different from the 2018 model zoo frozen graph.
We have tried different versions of config.py (the example shown below is the one used for the 2018 pretrained version), but we hit the same error every time.
This behavior might be related to some part of the graph that needs an additional transformation when the model is exported with the TensorFlow Object Detection API.
For the TensorFlow export step we use the script described in the GitHub documentation: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/exporting_models.md
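For reference, the export invocation follows the documented form; a typical call looks like this (the pipeline config path, checkpoint prefix, and output directory are placeholders for our actual training output):

python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path training/ssd_inception_v2_coco.config \
    --trained_checkpoint_prefix training/model.ckpt-XXXX \
    --output_directory exported_model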

On our PC with the above configuration, the error is as follows:

[libprotobuf FATAL /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/externals/protobuf/x86_64/10.0/include/google/protobuf/repeated_field.h:1408] CHECK failed: (index) < (current_size_):
terminate called after throwing an instance of 'google_private::protobuf::FatalException'
what(): CHECK failed: (index) < (current_size_):

We obtain a different error when we run the same script on our Jetson AGX Xavier (Ubuntu 18.04, TensorRT 5.0.3-1, CUDA 10.0):

UFFParser: Parser error: BoxPredictor_0/Reshape: Reshape: -1 dimension specified more than 1 time
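
For what it's worth, the shape that node receives can be inspected directly in the frozen graph with a script along these lines (a sketch assuming TensorFlow 1.x; the node name is taken from the error above):

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

nodes = {n.name: n for n in graph_def.node}
# The target shape of a Reshape op is its second input.
reshape = nodes["BoxPredictor_0/Reshape"]
shape_node = nodes[reshape.input[1].split(":")[0]]
print(shape_node.op)
if shape_node.op == "Const":
    print(tf.make_ndarray(shape_node.attr["value"].tensor))

If the second input turns out not to be a Const (i.e. the shape is computed at runtime rather than baked in at export time), that could explain how the parser ends up with more than one -1 dimension.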

config.py used to create the UFF:

#
# Copyright 1993-2019 NVIDIA Corporation. All rights reserved.
#

import graphsurgeon as gs
import tensorflow as tf

# Input placeholder that replaces the image_tensor/Preprocessor part of the graph
Input = gs.create_node("Input",
                       op="Placeholder",
                       dtype=tf.float32,
                       shape=[1, 3, 300, 300])

# GridAnchor plugin replacing the MultipleGridAnchorGenerator namespace
PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
                                 numLayers=6,
                                 minSize=0.2,
                                 maxSize=0.95,
                                 aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
                                 variance=[0.1, 0.1, 0.2, 0.2],
                                 featureMapShapes=[19, 10, 5, 3, 2, 1])

# NMS plugin replacing the Postprocessor namespace
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
                            shareLocation=1,
                            varianceEncodedInTarget=0,
                            backgroundLabelId=0,
                            confidenceThreshold=1e-8,
                            nmsThreshold=0.6,
                            topK=100,
                            keepTopK=100,
                            numClasses=91,
                            inputOrder=[0, 2, 1],
                            confSigmoid=1,
                            isNormalized=1)

# Concatenation nodes for the prior boxes and the box location/confidence outputs
concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2", dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)

namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    "image_tensor:0": Input,  # added for 2018
    "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "MultipleGridAnchorGenerator/Identity": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf
}

def preprocess(dynamic_graph):
    # Now create a new graph by collapsing namespaces
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Remove the outputs, so we just have a single output node (NMS).
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)
    dynamic_graph.find_nodes_by_op("NMS_TRT")[0].input.remove("Input")  # added for 2018

Any help would be greatly appreciated,
Thank you

Hi,

We have a sample that demonstrates model conversion for the Object Detection API:
[url]https://github.com/AastaNV/TRT_object_detection[/url]

There is also a config file for the ssd_inception_v2 model:
[url]https://github.com/AastaNV/TRT_object_detection/blob/master/config/model_ssd_inception_v2_coco_2017_11_17.py[/url]

Thanks.

Thank you for your fast reply.

Concerning the ssd_inception_v2_coco_2017_11_17 pretrained model, we have successfully reproduced your results: the conversion works properly.
In fact, the problem arises as soon as we retrain the same model, even on the same COCO dataset, with the TensorFlow Object Detection API, which produces a new frozen graph architecture.
So far we have not been able to work around the changes the TensorFlow Object Detection API makes to the graph, which prevent a correct conversion to UFF and a TensorRT engine.
We have tried exporting the TensorFlow model with different options, but so far without success.
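
To narrow down what the exporter changes, the node names of the two frozen graphs can be compared with a short script along these lines (a sketch assuming TensorFlow 1.x; the two paths are placeholders for the model zoo graph and our retrained graph):

import tensorflow as tf

def node_names(pb_path):
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    return {n.name for n in graph_def.node}

zoo = node_names("ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pb")
retrained = node_names("retrained/frozen_inference_graph.pb")
print("only in retrained:", sorted(retrained - zoo))
print("only in zoo:", sorted(zoo - retrained))

Nodes that exist only in the retrained graph are the ones the config.py namespace map would have to account for.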

Note: We managed to convert the ssd_inception_v2_coco_2018_01_28 model ([url]http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2018_01_28.tar.gz[/url]) in our PC environment (it does not currently work on the Jetson AGX, probably due to the TensorRT version difference).
In that case we made some modifications to the GraphSurgeon script for compatibility.
The config.py attached in our previous comment is the one we used for this conversion.

We use this command to convert our frozen models to UFF:
convert-to-uff --input-file frozen_inference_graph.pb -O NMS -p config.py
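
Once the UFF file is produced, we build the engine with the TensorRT Python API roughly as follows (a sketch based on the TensorRT 5.x API; the FlattenConcat plugin library name and the .uff path are assumptions that depend on the local setup):

import ctypes
import tensorrt as trt

# FlattenConcat_TRT may ship as a separate plugin library on TensorRT 5;
# adjust or drop this line depending on the installation.
ctypes.CDLL("libflattenconcat.so")

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(TRT_LOGGER, '')  # registers GridAnchor_TRT, NMS_TRT, ...

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 30
    parser.register_input("Input", (3, 300, 300))  # matches the placeholder in config.py
    parser.register_output("NMS")                  # the -O node passed to convert-to-uff
    parser.parse("frozen_inference_graph.uff", network)
    engine = builder.build_cuda_engine(network)

(With the retrained graph, both errors quoted above seem to occur at the parsing stage, so the build step is never reached.)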