sampleUffSSD with custom ssd_mobilenet_v1 model

Have you solved your problem?
I had an error like yours before, but I changed config.py and fixed it.

I could not resolve it. Can you share what you changed in config.py?

Can you share your config.py? I have the same problem as @vinaybk and need your help.

Hi,
I solved the problem by modifying the config as below:

import graphsurgeon as gs
import tensorflow as tf

Input = gs.create_node("Input",
    op="Placeholder",
    dtype=tf.float32,
    shape=[1, 3, 300, 300])
PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
    numLayers=6,
    minSize=0.2,
    maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1,0.1,0.2,0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1])
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=91,
    inputOrder=[0, 2, 1],
    confSigmoid=1,
    isNormalized=1)
concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2", dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)

namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor:0": Input,
    "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "MultipleGridAnchorGenerator/Identity": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf
}

namespace_remove = {
    "ToFloat",
    "image_tensor:0",
    "Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3"
}

def preprocess(dynamic_graph):
    # remove the unrelated or error layers
    dynamic_graph.remove(dynamic_graph.find_nodes_by_path(namespace_remove), remove_exclusive_dependencies=False)
    # Now create a new graph by collapsing namespaces
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Remove the outputs, so we just have a single output node (NMS).
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)
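As a side note on how the mapping above works (this is my understanding of graphsurgeon, illustrated with a toy standard-library sketch rather than the library's real code): collapse_namespaces folds every node whose path falls under a mapped namespace into the corresponding plugin node, with the most specific (longest) namespace winning, which is why "MultipleGridAnchorGenerator/Concatenate" can override the broader "MultipleGridAnchorGenerator" entry:

```python
# Toy illustration (NOT graphsurgeon's real code): how a node path
# is resolved against namespace_plugin_map by longest matching prefix.

def map_node(path, namespace_map):
    """Return the plugin a node path collapses into, or None."""
    best = None
    for namespace, plugin in namespace_map.items():
        if path == namespace or path.startswith(namespace + "/"):
            # Keep the most specific (longest) matching namespace.
            if best is None or len(namespace) > len(best[0]):
                best = (namespace, plugin)
    return best[1] if best else None

toy_map = {
    "MultipleGridAnchorGenerator": "GridAnchor",
    "MultipleGridAnchorGenerator/Concatenate": "concat_priorbox",
    "Postprocessor": "NMS",
    "Preprocessor": "Input",
}

print(map_node("Postprocessor/BatchMultiClassNonMaxSuppression/stack", toy_map))  # NMS
print(map_node("MultipleGridAnchorGenerator/Concatenate/concat_dim", toy_map))    # concat_priorbox
print(map_node("FeatureExtractor/MobilenetV1/Conv2d_0/weights", toy_map))         # None
```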

Use the above config.py and run the converter as below:

convert-to-uff --input-file ../ssd_mobilenet_v1_coco_2018_01_28/frozen_inference_graph.pb -O NMS -p config.py

One thing that helped me was to debug by adding the -l option at the end of the above command; it lists all the nodes so you can verify whether they are mapped properly, such as below.

The last few nodes should look like:

346 Add: "add"
347 Placeholder: "Input"
348 GridAnchor_TRT: "GridAnchor"
349 FlattenConcat_TRT: "concat_box_conf"
350 FlattenConcat_TRT: "concat_box_loc"
351 ConcatV2: "concat_priorbox"
352 NMS_TRT: "NMS"
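If you want to automate that check, the -l listing lines have the shape `<index> <op>: "<name>"`, so a small script (a rough sketch that parses a pasted listing) can verify that all the plugin nodes made it in:

```python
import re

# Parse lines like: 352 NMS_TRT: "NMS" from the convert-to-uff -l
# output and check that the expected plugin ops are in the graph.
LISTING = '''
346 Add: "add"
347 Placeholder: "Input"
348 GridAnchor_TRT: "GridAnchor"
349 FlattenConcat_TRT: "concat_box_conf"
350 FlattenConcat_TRT: "concat_box_loc"
351 ConcatV2: "concat_priorbox"
352 NMS_TRT: "NMS"
'''

def parse_listing(text):
    """Return a dict mapping node name -> op from a -l listing."""
    nodes = {}
    for line in text.splitlines():
        # Accept both straight and curly quotes (forum pastes vary).
        m = re.match(r'\s*\d+\s+(\w+):\s*["“](.+?)["”]', line)
        if m:
            nodes[m.group(2)] = m.group(1)
    return nodes

nodes = parse_listing(LISTING)
expected = {"NMS": "NMS_TRT", "GridAnchor": "GridAnchor_TRT",
            "concat_box_loc": "FlattenConcat_TRT",
            "concat_box_conf": "FlattenConcat_TRT"}
missing = {n for n, op in expected.items() if nodes.get(n) != op}
print("missing/mismatched:", missing or "none")
```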

When you run the command, the output should clearly tell you that it picked "NMS".

Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
Warning: No conversion function registered for layer: GridAnchor_TRT yet.
Converting GridAnchor as custom op: GridAnchor_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_loc as custom op: FlattenConcat_TRT
No. nodes: 350

If the output node is not specified properly, the converter tool will auto-deduce "add" as the output node. You can confirm this by running your sample code with "add" as the output, but it will not give you the SSD outputs. The NMS node is what gives you the SSD outputs, so make sure the output node you directed it to is visible at the end of the conversion.

Once this UFF is ready, you can run it through the sample code and see classification and detections happening.

Hope this helps.

Hello,

I’m trying to convert one of the ssd_mobilenet_v1 models from the TensorFlow model zoo.
I can convert the frozen model to .uff, but when running the sample I get the following:
[TRT] UffParser: Validator error: FeatureExtractor/MobilenetV1/zeros_4: Unsupported operation _Fill

As far as I understand, that means the operation is not implemented, but I can see people are able to run MobileNet V1, so I don’t see why this would fail.

Any hint?

Hi esofabian,
I am confused whether you are running ssd_mobilenet or plain mobilenet. SSD-MobileNet requires preprocessing using the config.py mentioned earlier, since there are four outputs that need to be merged into the NMS node, whereas MobileNet is a classifier with a single output, so it does not need config.py.
You can debug the converter with the -l option, which shows the node mapping; from that you can try tracing where this zeros_4 is coming from.
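One crude shortcut for tracing a node like zeros_4 without TensorBoard: node names are serialized as plain UTF-8 strings inside a frozen GraphDef, so a byte scan of the .pb can at least confirm whether the node is present (a blunt stdlib-only check, not a substitute for real graph inspection; the file path below is just an example):

```python
def contains_node(pb_path_or_bytes, node_name):
    """Crudely check whether a node name occurs in a frozen GraphDef.

    Node names are serialized as plain UTF-8 strings, so a substring
    scan of the raw bytes is a usable (if blunt) presence check.
    """
    if isinstance(pb_path_or_bytes, bytes):
        data = pb_path_or_bytes
    else:
        with open(pb_path_or_bytes, "rb") as f:
            data = f.read()
    return node_name.encode("utf-8") in data

# Hypothetical usage on the frozen model from the model zoo:
# contains_node("frozen_inference_graph.pb",
#               "FeatureExtractor/MobilenetV1/zeros_4")
```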

Hi,

Thanks for your comments on TensorRT-based model optimisation. I’m trying to do the same for ssd_inception_v2 with a custom-trained model (six classes). However, I’m getting errors like the one below while creating the TensorRT engine from the UFF (the UFF itself is created successfully). Could you please share your thoughts or an approach for handling such errors?

[TensorRT] ERROR: UffParser: Validator error: concat_box_loc: Unsupported operation _FlattenConcat_TRT

TensorRT Version : 5.1.2.2
My config.py is as below:

import graphsurgeon as gs
import tensorflow as tf

Input = gs.create_node("Input",
    op="Placeholder",
    dtype=tf.float32,
    shape=[1, 3, 300, 300])
PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
    numLayers=6,
    minSize=0.2,
    maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1, 0.1, 0.2, 0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1])
    # featureMapShapes=[38, 19, 10, 5, 3, 2])
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=7,
    inputOrder=[0, 2, 1],
    confSigmoid=1,
    isNormalized=1,
    scoreConverter="SIGMOID")

concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2", dtype=tf.float32, axis=2)
concat_box_loc = gs.create_node("concat_box_loc", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_node("concat_box_conf", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)

namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    # "ToFloat": Input,
    # "image_tensor": Input,
    # "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "Concatenate/concat": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

namespace_remove = {
    "ToFloat",
    "image_tensor",
    "Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3",
}

def preprocess(dynamic_graph):
    print('>>>>>>>>>>>> Inside preprocess ')
    # remove the unrelated or error layers
    dynamic_graph.remove(dynamic_graph.find_nodes_by_path(namespace_remove), remove_exclusive_dependencies=False)

    # Now create a new graph by collapsing namespaces
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Remove the outputs, so we just have a single output node (NMS).
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)

    # Remove the Squeeze to avoid "Assertion `isPlugin(layerName)' failed"
    Squeeze = dynamic_graph.find_node_inputs_by_name(dynamic_graph.graph_outputs[0], 'Squeeze')
    dynamic_graph.forward_inputs(Squeeze)

Hi Ashispapu,
Can you check your UFF conversion with the -l option, so that you can see whether all nodes are properly mapped and converted? You should also check what is happening with the node "_FlattenConcat_TRT", which TensorRT is not able to understand.
In addition, cross-check the troublesome nodes in TensorBoard.

Regards
Vinay

Hi vinaybk,

Thanks for your feedback. I checked the UFF conversion with the -l option and could see the mapping below.

5 Const: "add/y"
1196 Add: "add"
1197 Placeholder: "Input"
1198 FlattenConcat_TRT: "concat_box_conf"
1199 ConcatV2: "concat_priorbox"
1200 FlattenConcat_TRT: "concat_box_loc"
1201 GridAnchor_TRT: "GridAnchor"
1202 NMS_TRT: "NMS"

While trying to parse the UFF to create the engine, I get the error below.

[TensorRT] ERROR: UffParser: Validator error: concat_box_loc: Unsupported operation _FlattenConcat_TRT (not sure how the '_' is getting prepended to the operation)

Below snippet is from my config.py

NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=7,
    inputOrder=[0, 2, 1],
    confSigmoid=1,
    isNormalized=1)
concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2", dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)

TensorRT Version : 5.1.2.2
TF Version: 1.13.1

Thanks for your help.

Can you please try accessing one layer above, i.e. ConcatV2: "concat_priorbox"?

Sorry, I did not understand clearly. Do you mean to set it as the output node?

In TensorRT, instead of NMS_TRT, try accessing the layer one above it.

Hi Vinay,

I tried accessing layers above NMS_TRT, like Add and Squeeze, but I am still getting the same error.

[TensorRT] ERROR: UffParser: Validator error: concat_box_loc: Unsupported operation _FlattenConcat_TRT.

Then your UFF conversion is wrong; I expected the layer above to be accessible.
Can you check in your UFF conversion where this _FlattenConcat_TRT is coming from?

I can see only FlattenConcat_TRT: "concat_box_conf". I did not find any _FlattenConcat_TRT, and I am not sure how the '_' is getting prepended.

1199 FlattenConcat_TRT: "concat_box_loc"
1200 NMS_TRT: "NMS"
1201 ConcatV2: "concat_priorbox"
1202 FlattenConcat_TRT: "concat_box_conf"

I am attaching all nodes/operations from the conversion to UFF. I do not see any operation named _FlattenConcat_TRT; however, I get the error for it while creating the TRT engine.

Please find the list of nodes/operations in the attached file (convert_trt_ssd_6_class.txt).

Hi @vinaybk,
Did you work with the ssd_inception_v2 model?

I am also working on the same thing.
My sampleUffSSD.cpp is not throwing any errors, but no detections are happening.

Could you provide your config file for the UFF conversion, and any other changes you made to get it working?