ERROR: UFFParser: Validator error: NMS

Hello. I’m trying to run ssd_inception_v2_coco_2017_11_17.tar.gz with TensorRT.
Following the instructions, I generated the UFF file, but sample_uff_ssd fails to run it. How can I solve this?

$ convert-to-uff tensorflow --input-file ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pb -O NMS -p config.py

Converting as custom op NMS_TRT NMS
name: "NMS"
op: "NMS_TRT"
input: "concat_box_loc"
input: "concat_priorbox"
input: "concat_box_conf"
attr {
  key: "backgroundLabelId_u_int"
  value {
    i: 0
  }
}
attr {
  key: "confSigmoid_u_int"
  value {
    i: 1
  }
}
attr {
  key: "confidenceThreshold_u_float"
  value {
    f: 9.99999993922529e-09
  }
}
attr {
  key: "inputOrder_u_ilist"
  value {
    list {
      i: 0
      i: 2
      i: 1
    }
  }
}
attr {
  key: "isNormalized_u_int"
  value {
    i: 1
  }
}
attr {
  key: "keepTopK_u_int"
  value {
    i: 100
  }
}
attr {
  key: "nmsThreshold_u_float"
  value {
    f: 0.6000000238418579
  }
}
attr {
  key: "numClasses_u_int"
  value {
    i: 91
  }
}
attr {
  key: "scoreConverter_u_str"
  value {
    s: "SIGMOID"
  }
}
attr {
  key: "shareLocation_u_int"
  value {
    i: 1
  }
}
attr {
  key: "topK_u_int"
  value {
    i: 100
  }
}
attr {
  key: "varianceEncodedInTarget_u_int"
  value {
    i: 0
  }
}

Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting as custom op FlattenConcat_TRT concat_box_conf
name: "concat_box_conf"
op: "FlattenConcat_TRT"
input: "BoxPredictor_0/Reshape_1"
input: "BoxPredictor_1/Reshape_1"
input: "BoxPredictor_2/Reshape_1"
input: "BoxPredictor_3/Reshape_1"
input: "BoxPredictor_4/Reshape_1"
input: "BoxPredictor_5/Reshape_1"
attr {
  key: "axis_u_int"
  value {
    i: 1
  }
}
...

And sample_uff_ssd outputs these errors:

$ ../../bin/sample_uff_ssd
../../data/ssd/sample_ssd.uff
Begin parsing model...
ERROR: UFFParser: Validator error: NMS: Unsupported operation _NMS_TRT
ERROR: sample_uff_ssd: Fail to parse
sample_uff_ssd: sampleUffSSD.cpp:667: int main(int, char**): Assertion `tmpEngine != nullptr' failed.
Aborted (core dumped)

TensorRT version is 4.0.1.6-1+cuda9.0
uff (0.4.0)
graphsurgeon (0.2.0)

Hello, can you provide details on the platforms you are using?

Linux distro and version
GPU type
nvidia driver version
CUDA version
CUDNN version
Python version [if using python]
Tensorflow version
TensorRT version

are you referencing the ssd_inception_v2_coco_2017_11_17 from http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz?

I checked on a Jetson TX2 and on a GTX machine.

Ubuntu: 16.04
GPU: TX2 or GTX1080
CUDA: 9.0
CUDNN: 7
Python version: 3.5
Tensorflow: 1.10
TensorRT 4.0.2

I am referencing the same model you mentioned.

I wrote a reproduction script in Python. I’ll share it here.

Converting and parsing:

import uff
import tensorflow as tf
import tensorrt as trt
from tensorrt.parsers import uffparser  # TensorRT 4.x legacy Python API
import config

frozen_file = 'ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pb'
quiet = False
list_nodes = False
text = False

# Convert the frozen TensorFlow graph to UFF, applying config.py as the preprocessor.
uff_model = uff.from_tensorflow_frozen_model(frozen_file, output_nodes=['NMS'], preprocessor='config.py', quiet=quiet, list_nodes=list_nodes, text=text)

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)

# Register the network I/O and build the engine (max batch size 1, 1 MiB workspace).
parser = uffparser.create_uff_parser()
parser.register_input("Placeholder", (3, 300, 300), 0)
parser.register_output("MarkOutput_0")

engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)

config.py

import graphsurgeon as gs
import tensorflow as tf

Input = gs.create_node("Input",
    op="Placeholder",
    dtype=tf.float32,
    shape=[1, 3, 300, 300])
PriorBox = gs.create_node(name="GridAnchor", op="GridAnchor_TRT",
    numLayers=6,
    minSize=0.2,
    maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1,0.1,0.2,0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1])
NMS = gs.create_node(name="NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=91,
    inputOrder=[0, 2, 1],
    confSigmoid=1,
    isNormalized=1,
    scoreConverter="SIGMOID")
concat_priorbox = gs.create_node(name="concat_priorbox", op="Concat_TRT", dtype=tf.float32, axis=2, ignoreBatch=1)
concat_box_loc = gs.create_node("concat_box_loc", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_node("concat_box_conf", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)

namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf
}

def preprocess(dynamic_graph):
    # Now create a new graph by collapsing namespaces
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Remove the outputs, so we just have a single output node (NMS).
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)

The error is:

[TensorRT] ERROR: UFFParser: Validator error: NMS: Unsupported operation _NMS_TRT
[TensorRT] ERROR: Failed to parse UFF model stream
Traceback (most recent call last):
  File "/opt/conda/lib/python3.5/site-packages/tensorrt/utils/_utils.py", line 255, in uff_to_trt_engine
    assert(parser.parse(stream, network, model_datatype))
AssertionError

I changed NMS_TRT to NMS in config.py and that error was solved.
But a new error occurs:

ERROR: UFFParser: Validator error: concat_box_loc: Unsupported operation _FlattenConcat
ERROR: sample_uff_ssd: Fail to parse
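For reference, the modification amounts to dropping the `_TRT` suffix from the plugin op names in config.py. This is a sketch only: the TRT 4.x op names are an assumption based on this thread, and the remaining `_FlattenConcat` error suggests the same suffix change is needed for the concat nodes too.

```python
import graphsurgeon as gs
import tensorflow as tf

# config.py fragment for TensorRT 4.x: same nodes and attributes as the
# config.py earlier in the thread, but with the "_TRT" suffix removed
# from the plugin op names (assumption based on this thread).
NMS = gs.create_node(name="NMS", op="NMS",  # was op="NMS_TRT"
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=91,
    inputOrder=[0, 2, 1],
    confSigmoid=1,
    isNormalized=1,
    scoreConverter="SIGMOID")
concat_box_loc = gs.create_node("concat_box_loc",
    op="FlattenConcat",  # was op="FlattenConcat_TRT"
    dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_node("concat_box_conf",
    op="FlattenConcat",  # was op="FlattenConcat_TRT"
    dtype=tf.float32, axis=1, ignoreBatch=0)
```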

It works!
I’ll share the solution later.

Hi keisuke,

I have the same problem.
Can you share your solution?
Thank you very much.

Similar problem here. How did you solve it?

TensorRT v5

Hello. This problem is caused by the TensorRT version.
In TRT 4.x, the NMS plugin op is registered as ‘NMS’.
In TRT 5.x, it is registered as ‘NMS_TRT’.
Likewise, the ‘FlattenConcat’ op has different names in different TRT versions.
So please make sure your config.py matches your TRT version. A matching config.py can be found under /sample/sampleUffSSD/ in your TensorRT package; in that path you can also find a README file, so follow what it says.
As I remember, TRT 5.x supports the ssd_inception_v2_coco_2017_11_17.tar.gz model, but TRT 4.x does not.
Different models may need a different config.py for preprocessing.
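If you want one config.py that works on both versions, a minimal sketch is to derive the suffix from the TensorRT version string (the 4.x-vs-5.x naming rule is taken from this thread; in a real config.py you would pass `tensorrt.__version__` rather than a literal):

```python
def plugin_op(base_name, trt_version):
    """Return the plugin op name for a given TensorRT version string.

    TRT 4.x registers SSD plugin ops as e.g. "NMS" / "FlattenConcat";
    TRT 5.x adds a "_TRT" suffix ("NMS_TRT" / "FlattenConcat_TRT").
    Naming rule taken from this thread; verify against your install.
    """
    major = int(trt_version.split(".")[0])
    return base_name + ("_TRT" if major >= 5 else "")

print(plugin_op("NMS", "4.0.1.6"))        # NMS
print(plugin_op("NMS", "5.1.2.2"))        # NMS_TRT
print(plugin_op("FlattenConcat", "5.0"))  # FlattenConcat_TRT
```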

Hi keisuke,

I have the same problem as you.
Can you share your solution?
Thank you very much!

Hi houhongyi,keisuke.fujimoto,

I’m getting the error below while creating the engine from the UFF file.
TRT version: 5.1.2.2

[TensorRT] ERROR: UffParser: Validator error: concat_box_loc: Unsupported operation _FlattenConcat_TRT

Could you share your feedback?