SSD InceptionV2 TensorRT Error

Hi team,

I am trying to convert an ssd_inception_v2 (six-class) model using TensorRT. I could convert the .pb to UFF, but while trying to create the engine I get the error below.

[TensorRT] ERROR: UffParser: Validator error: concat_box_loc: Unsupported operation _FlattenConcat_TRT

Linux distro and version: Ubuntu 16.04
GPU type: 1070
nvidia driver version:
CUDA version: 9.0
cuDNN version: 7.5.0
Python version [if using python]: 3.5
TensorFlow version: 1.13
TensorRT version: 5.1.2.2

Below is my config.py:

import graphsurgeon as gs
import tensorflow as tf

Input = gs.create_node("Input",
    op="Placeholder",
    dtype=tf.float32,
    shape=[1, 3, 300, 300])

PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
    numLayers=6,
    minSize=0.2,
    maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1, 0.1, 0.2, 0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1])
    # featureMapShapes=[38, 19, 10, 5, 3, 2])

NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=7,
    inputOrder=[0, 2, 1],
    confSigmoid=1,
    isNormalized=1,
    scoreConverter="SIGMOID")

concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2", dtype=tf.float32, axis=2)
concat_box_loc = gs.create_node("concat_box_loc", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_node("concat_box_conf", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)

namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    # "ToFloat": Input,
    # "image_tensor": Input,
    # "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "Concatenate/concat": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

namespace_remove = {
    "ToFloat",
    "image_tensor",
    "Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3",
}

def preprocess(dynamic_graph):
    print('>>>>>>>>>>>> Inside preprocess')
    # Remove the unrelated or error layers.
    dynamic_graph.remove(dynamic_graph.find_nodes_by_path(namespace_remove), remove_exclusive_dependencies=False)

    # Now create a new graph by collapsing namespaces.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Remove the outputs, so we just have a single output node (NMS).
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)

    # Remove the Squeeze to avoid "Assertion `isPlugin(layerName)' failed".
    Squeeze = dynamic_graph.find_node_inputs_by_name(dynamic_graph.graph_outputs[0], 'Squeeze')
    dynamic_graph.forward_inputs(Squeeze)
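For reference, the two featureMapShapes candidates in the config above (the active [19, 10, 5, 3, 2, 1] and the commented-out [38, 19, 10, 5, 3, 2]) both follow the same pattern: each SSD feature map is the ceiling of half the previous one, so only the first map's size depends on the backbone. A small sketch (the helper name is mine, not part of graphsurgeon) that reproduces both lists:

```python
import math

def feature_map_shapes(first, num_layers=6):
    """Successive ceil-halving of the first feature map size,
    matching SSD's feature pyramid for a 300x300 input."""
    shapes = [first]
    for _ in range(num_layers - 1):
        shapes.append(math.ceil(shapes[-1] / 2))
    return shapes

feature_map_shapes(19)  # -> [19, 10, 5, 3, 2, 1]  (ssd_inception_v2)
feature_map_shapes(38)  # -> [38, 19, 10, 5, 3, 2]
```

If the engine builds but detections are wrong, a mismatch here between the config and the actual backbone is a common cause.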

I am attaching the list of all nodes/operations produced during the conversion to UFF. I do not see any operation named _FlattenConcat_TRT in that list, yet I get the error for that very op while creating the TRT engine.

Please find the list of nodes/operations at the link below (convert_trt_ssd_6_class.txt).

frozen_inference_graph_inception_6_class.pbtxt contains _FlattenConcat_TRT. I am not sure how the leading underscore is getting appended.

https://filebin.net/n6xw204kxp7u4ozy

Update: it works after setting the path to the plugin library.
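For anyone hitting the same error: in TensorRT 5.x, FlattenConcat is not a built-in plugin; it ships as sample code that must be compiled to a shared library and loaded into the process before the UFF parser runs, which is what resolves the _FlattenConcat_TRT validator error. A minimal sketch of that loading step, assuming the library was built at the path below (adjust it to your build location):

```python
import ctypes
import os

# Hypothetical path: adjust to wherever libflattenconcat.so was built
# (e.g. from the TensorRT samples that include the FlattenConcat plugin).
PLUGIN_LIB = "/usr/src/tensorrt/samples/python/uff_ssd/build/libflattenconcat.so"

def load_flatten_concat_plugin(lib_path=PLUGIN_LIB):
    """Load the shared library that registers FlattenConcat with
    TensorRT's plugin registry, so the parser can resolve the
    _FlattenConcat_TRT node at engine-build time."""
    if not os.path.isfile(lib_path):
        raise FileNotFoundError("Plugin library not found: " + lib_path)
    # ctypes.CDLL keeps the library loaded for the life of the process;
    # its static initializers register the plugin creator with TensorRT.
    return ctypes.CDLL(lib_path)
```

Call load_flatten_concat_plugin() before parsing the UFF file and building the engine.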