TensorRT and Tensorflow: convert to uff failed

Hello. I’m trying to convert the default TF frozen MobileNet to UFF format with the uff.from_tensorflow_frozen_model method.

But I’m getting this error:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-5-c469da85ed95> in <module>()
----> 1 create_and_save_inf_engine()

<ipython-input-4-cfa7ab8dcec7> in create_and_save_inf_engine()
     12     height = 300
     13 
---> 14     uff_model = uff.from_tensorflow_frozen_model('/home/undead/reps/tf_models/object_detection/ckpt/mobilenet_v2.pb', output_layers)
     15 #     parser = uffparser.create_uff_parser()
     16 #     parser.register_input(input_layers[0], (channels, width, height), 0)

/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py in from_tensorflow_frozen_model(frozen_file, output_nodes, **kwargs)
    101     graphdef.ParseFromString(open(frozen_file, "rb").read())
    102 
--> 103     return from_tensorflow(graphdef, output_nodes, **kwargs)

/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py in from_tensorflow(graphdef, output_nodes, **kwargs)
     73         output_nodes=output_nodes,
     74         input_replacements=input_replacements,
---> 75         name="main")
     76 
     77     uff_metagraph_proto = uff_metagraph.to_uff()

/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py in convert_tf2uff_graph(cls, tf_graphdef, uff_metagraph, output_nodes, input_replacements, name)
     62         while len(nodes_to_convert):
     63             nodes_to_convert += cls.convert_tf2uff_node(nodes_to_convert.pop(), tf_nodes,
---> 64                                                         uff_graph, input_replacements)
     65         for output in output_nodes:
     66             uff_graph.mark_output(output)

/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py in convert_tf2uff_node(cls, name, tf_nodes, uff_graph, input_replacements)
     44         for i, inp in enumerate(inputs):
     45             inp_name, num = cls.split_node_name_and_output(inp);
---> 46             inp_node = tf_nodes[inp_name]
     47             if inp_node.op == 'Identity':
     48                 inputs[i] = inp_node.input[0]

KeyError: '^FeatureExtractor/Assert/Assert'

It’s very unclear what happened. Can you help me?
It’s difficult to use TensorRT when it is impossible to convert even a simple model.
frozen_inference_graph.pb.zip (24.4 MB)

More precisely, first I got this error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-3-c469da85ed95> in <module>()
----> 1 create_and_save_inf_engine()

<ipython-input-2-cfa7ab8dcec7> in create_and_save_inf_engine()
     12     height = 300
     13 
---> 14     uff_model = uff.from_tensorflow_frozen_model('/home/undead/reps/tf_models/object_detection/ckpt/inception_300.pb', output_layers)
     15 #     parser = uffparser.create_uff_parser()
     16 #     parser.register_input(input_layers[0], (channels, width, height), 0)

/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py in from_tensorflow_frozen_model(frozen_file, output_nodes, **kwargs)
    101     graphdef.ParseFromString(open(frozen_file, "rb").read())
    102 
--> 103     return from_tensorflow(graphdef, output_nodes, **kwargs)

/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py in from_tensorflow(graphdef, output_nodes, **kwargs)
     73         output_nodes=output_nodes,
     74         input_replacements=input_replacements,
---> 75         name="main")
     76 
     77     uff_metagraph_proto = uff_metagraph.to_uff()

/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py in convert_tf2uff_graph(cls, tf_graphdef, uff_metagraph, output_nodes, input_replacements, name)
     62         while len(nodes_to_convert):
     63             nodes_to_convert += cls.convert_tf2uff_node(nodes_to_convert.pop(), tf_nodes,
---> 64                                                         uff_graph, input_replacements)
     65         for output in output_nodes:
     66             uff_graph.mark_output(output)

/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py in convert_tf2uff_node(cls, name, tf_nodes, uff_graph, input_replacements)
     49         op = tf_node.op
     50         uff_node = cls.convert_layer(
---> 51             op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes)
     52         return uff_node
     53 

/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py in convert_layer(cls, op, name, tf_node, inputs, uff_graph, **kwargs)
     26             print("Converting as custom op", op, name)
     27             print(tf_node)
---> 28             fields = cls.parse_tf_attrs(tf_node.attr)
     29             uff_graph.custom_node(op, inputs, name, fields)
     30             return [cls.split_node_name_and_output(inp)[0] for inp in inputs]

/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py in parse_tf_attrs(cls, attrs)
    175     def parse_tf_attrs(cls, attrs):
    176         return {key: cls.parse_tf_attr_value(val)
--> 177                 for key, val in attrs.items()}
    178 
    179     @classmethod

/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py in <dictcomp>(.0)
    175     def parse_tf_attrs(cls, attrs):
    176         return {key: cls.parse_tf_attr_value(val)
--> 177                 for key, val in attrs.items()}
    178 
    179     @classmethod

/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py in parse_tf_attr_value(cls, val)
    170     def parse_tf_attr_value(cls, val):
    171         code = val.WhichOneof('value')
--> 172         return cls.convert_tf2uff_field(code, val)
    173 
    174     @classmethod

/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py in convert_tf2uff_field(cls, code, val)
    159         elif code == 'shape':
    160             shp = val.dim
--> 161             if shp.unknown_rank:
    162                 raise ValueError(
    163                     "Unsupported: shape attribute with unknown rank")

AttributeError: 'RepeatedCompositeFieldContainer' object has no attribute 'unknown_rank'

but I commented out lines 161-163. Then I got the error from the first post.

Hi,

Could you share your .pb file with us for debugging?
Thanks.

Hi, thank you for your answer.
I’ve attached the file to the first post. It is the default SSD MobileNet trained on COCO in the TensorFlow Object Detection API.

In that default model the error is a little different:

^Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/Assert/Assert

but the meaning is the same.

I tried to remove all Assert nodes from the graph, but that was not successful.
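For reference, the idea of stripping Assert nodes (and the `^`-prefixed control-dependency references to them, like the `^FeatureExtractor/Assert/Assert` in the KeyError above) can be sketched like this. Plain dicts stand in for GraphDef NodeDefs; the node names are illustrative only:

```python
# Sketch: remove Assert nodes and any control-dependency references to them.
# Control inputs in a GraphDef are prefixed with '^'.

def strip_asserts(nodes):
    assert_names = {n["name"] for n in nodes if n["op"] == "Assert"}
    kept = [n for n in nodes if n["op"] != "Assert"]
    for n in kept:
        # Drop inputs (data or control) that point at a removed Assert node.
        n["input"] = [i for i in n["input"]
                      if i.lstrip("^") not in assert_names]
    return kept

graph = [
    {"name": "FeatureExtractor/Assert/Assert", "op": "Assert", "input": []},
    {"name": "conv1", "op": "Conv2D",
     "input": ["image_tensor", "^FeatureExtractor/Assert/Assert"]},
]
cleaned = strip_asserts(graph)
```

With a real frozen model the same rewiring would be applied to `graph_def.node` after parsing the .pb file.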

Hi,

Sorry for the late reply.
Could you also share the expected output layer names?

Thanks.

Hi. Thank you for response.
Input node is ‘image_tensor’.
Output nodes are ‘detection_boxes’, ‘detection_scores’, ‘detection_classes’, ‘num_detections’.

Hi,

Thanks for your information.

There are lots of non-supported layers in your model, e.g. Identity, Cast, Gather…
That’s why the error occurs.

For supported layer information, please check our User Guide:
/usr/share/doc/tensorrt/TensorRT-3-User-Guide.pdf.gz

By the way, a user has successfully converted MobileNet to TensorRT; maybe you can get more information from him:
https://devtalk.nvidia.com/default/topic/1025870/depthwise-convolution-is-very-slow-using-tensorrt3-0/#5217490

Thanks.

Ok, thanks. I will try to dig further.

@AastaLLL Is there a way to convert unsupported layers into a UFF model? For example, through a customized layer?

Hi,

It depends on whether it is a weight-contained layer.

If not, you can just remove it and add it back via the plugin API.
If it contains weights, it can’t be converted into UFF format, since no converter is available.

Thanks.
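The weight-contained distinction above can be checked mechanically: in a frozen graph, a layer’s weights arrive through Const inputs. A minimal sketch of that check, again using plain dicts as a stand-in for NodeDefs (all names here are hypothetical):

```python
# Sketch: a node is treated as "weight-contained" if any of its inputs
# resolves to a Const node in the graph.

def has_weights(node, nodes_by_name):
    return any(nodes_by_name[i.lstrip("^")]["op"] == "Const"
               for i in node["input"]
               if i.lstrip("^") in nodes_by_name)

nodes = {
    "w":        {"name": "w", "op": "Const", "input": []},
    "matmul":   {"name": "matmul", "op": "MatMul", "input": ["x", "w"]},
    "identity": {"name": "identity", "op": "Identity", "input": ["matmul"]},
}
```

Under this heuristic, `matmul` is weight-contained (its `w` input is a Const) and so cannot be round-tripped through a custom UFF op, while `identity` is not and could be removed and re-added via a plugin.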

Hello everyone,

I am getting an error while trying to convert my TensorFlow .pb file to .uff format using convert-to-uff.py.

Here is the error I am currently getting:

Loading /home/maleshinloye/Frozen_models/vgg_16_classifier.pb
Using output node vgg_16/fc8/squeezed
Converting to UFF graph
Warning: No conversion function registered for layer: Squeeze yet.
Converting as custom op Squeeze vgg_16/fc8/squeezed
name: "vgg_16/fc8/squeezed"
op: "Squeeze"
input: "vgg_16/fc8/BiasAdd"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "squeeze_dims"
  value {
    list {
      i: 1
      i: 2
    }
  }
}

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py", line 109, in <module>
    main()
  File "/usr/local/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py", line 104, in main
    output_filename=args.output
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 103, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 75, in from_tensorflow
    name="main")
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 64, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 51, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 28, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 177, in parse_tf_attrs
    for key, val in attrs.items()}
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 177, in <dictcomp>
    for key, val in attrs.items()}
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 172, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 158, in convert_tf2uff_field
    return uff.List(uff_code, [cls.convert_tf2uff_field(code, v) for v in field_value])
TypeError: 'ListValue' object is not iterable

Process finished with exit code 1

I understand that conversion errors can be caused by unsupported layers, but my error seems to be different from those cases, so I am unsure what is causing it. I wanted to attach my frozen model to this post, but it seems to be too big. Please let me know if it is required. Any help would be appreciated. Thanks!

Hi,

This error is from the non-supported Squeeze layer.
Thanks.

@AastaLLL Can you teach me how to remove non-supported layers?
I only have the .pb file. I don’t have the source code.

Using output node output
Converting to UFF graph
Warning: No conversion function registered for layer: Identity yet.
Converting as custom op Identity output
name: "output"
op: "Identity"
input: "BiasAdd_22"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Warning: No conversion function registered for layer: RealDiv yet.
Converting as custom op RealDiv truediv_21
name: "truediv_21"
op: "RealDiv"
input: "sub_21"
input: "truediv_21/y"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

[The same RealDiv warning repeats for truediv_19 down through truediv_1 and truediv.]

Warning: No conversion function registered for layer: ExtractImagePatches yet.
Converting as custom op ExtractImagePatches ExtractImagePatches
name: "ExtractImagePatches"
op: "ExtractImagePatches"
input: "47-leaky"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "ksizes"
  value {
    list {
      i: 1
      i: 2
      i: 2
      i: 1
    }
  }
}
attr {
  key: "padding"
  value {
    s: "VALID"
  }
}
attr {
  key: "rates"
  value {
    list {
      i: 1
      i: 1
      i: 1
      i: 1
    }
  }
}
attr {
  key: "strides"
  value {
    list {
      i: 1
      i: 2
      i: 2
      i: 1
    }
  }
}

Warning: No conversion function registered for layer: RealDiv yet.
Converting as custom op RealDiv truediv_20
name: "truediv_20"
op: "RealDiv"
input: "sub_20"
input: "truediv_20/y"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Warning: No conversion function registered for layer: Identity yet.
Converting as custom op Identity concat
name: "concat"
op: "Identity"
input: "29-leaky"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py", line 109, in <module>
    main()
  File "/usr/local/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py", line 104, in main
    output_filename=args.output
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 103, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 77, in from_tensorflow
    uff_metagraph_proto = uff_metagraph.to_uff()
  File "/usr/local/lib/python2.7/dist-packages/uff/model/meta_graph.py", line 39, in to_uff
    graphs=[graph.to_uff(debug) for graph in self.graphs],
  File "/usr/local/lib/python2.7/dist-packages/uff/model/graph.py", line 26, in to_uff
    graph = uff_pb.Graph(id=self.name, nodes=self._check_graph_and_get_nodes())
  File "/usr/local/lib/python2.7/dist-packages/uff/model/graph.py", line 46, in _check_graph_and_get_nodes
    raise extend_with_original_traceback(e, node._trace)
ValueError: Field name must be a string

Originally defined at:
  File "/usr/local/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py", line 109, in <module>
    main()
  File "/usr/local/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py", line 104, in main
    output_filename=args.output
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 103, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 75, in from_tensorflow
    name="main")
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 64, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 51, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 29, in convert_layer
    uff_graph.custom_node(op, inputs, name, fields)
  File "/usr/local/lib/python2.7/dist-packages/uff/model/graph.py", line 233, in custom_node
    return self._add_node(op, name, inputs=inputs, fields=fields, extra_fields=extra_fields)
  File "/usr/local/lib/python2.7/dist-packages/uff/model/graph.py", line 65, in _add_node
    node = Node(self, op, name, inputs, fields, extra_fields)

I guess the error is from non-supported layers. Can you teach me how to delete the non-supported layers from the .pb file?

Hi,

We usually remove the non-supported layers via the TensorFlow source.
A .pb is a frozen model, and it’s tricky to remove a layer from it.

Thanks.
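That said, a pass-through layer such as the Identity named `output` in the log above can in principle be removed even from a frozen graph, by rewiring every consumer to the layer’s own input. A sketch of that rewiring, with plain dicts standing in for GraphDef NodeDefs (the `consumer` node is hypothetical):

```python
# Sketch: bypass a single-input pass-through node (e.g. an Identity) by
# pointing all of its consumers at its input, then dropping the node.

def bypass_node(nodes, name):
    target = next(n for n in nodes if n["name"] == name)
    replacement = target["input"][0]          # pass-through: one data input
    kept = [n for n in nodes if n["name"] != name]
    for n in kept:
        n["input"] = [replacement if i == name else i for i in n["input"]]
    return kept

graph = [
    {"name": "BiasAdd_22", "op": "BiasAdd", "input": ["conv", "bias"]},
    {"name": "output",     "op": "Identity", "input": ["BiasAdd_22"]},
    {"name": "consumer",   "op": "Relu",     "input": ["output"]},
]
rewired = bypass_node(graph, "output")
```

With a real .pb you would parse it into a GraphDef, apply the same edit to `graph_def.node`, and serialize it back; this only works for layers that carry no weights.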

Hi,

Just wanted to thank @AastaLLL for his quick replies. I managed to remove the unsupported layers and convert my .pb file; now I’m trying to get it going on the TensorRT engine. Thanks a lot!

Hi everyone

I’m trying to optimize VGG16 using TensorRT 3 GA, and just like @damilola_aleshinloye here,

I am getting messages about non-supported layers, as follows:

Loading /home/wahaj/Downloads/frozen.pb
Using output node vgg_16/fc8/BiasAdd
Converting to UFF graph
Warning: No conversion function registered for layer: Floor yet.
Converting as custom op Floor vgg_16/dropout7/dropout/Floor
name: "vgg_16/dropout7/dropout/Floor"
op: "Floor"
input: "vgg_16/dropout7/dropout/add"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Warning: No conversion function registered for layer: RandomUniform yet.
Converting as custom op RandomUniform vgg_16/dropout7/dropout/random_uniform/RandomUniform
name: "vgg_16/dropout7/dropout/random_uniform/RandomUniform"
op: "RandomUniform"
input: "vgg_16/dropout7/dropout/Shape"
attr {
  key: "T"
  value {
    type: DT_INT32
  }
}
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "seed"
  value {
    i: 0
  }
}
attr {
  key: "seed2"
  value {
    i: 0
  }
}

Warning: No conversion function registered for layer: RealDiv yet.
Converting as custom op RealDiv vgg_16/dropout7/dropout/div
name: "vgg_16/dropout7/dropout/div"
op: "RealDiv"
input: "vgg_16/fc7/Relu"
input: "vgg_16/dropout7/dropout/keep_prob"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

[The same Floor, RandomUniform, and RealDiv warnings repeat for vgg_16/dropout6.]

No. nodes: 110
UFF Output written to /home/wahaj/Downloads/k.uff

But in my case these are warnings only, and the .uff is created successfully.

So I would like to clarify: in the GA edition, are unsupported layers skipped by default?
Or should I remove them and add them back via the plugin API?

Hi,

The UFF parser will skip non-supported layers automatically.

But for TensorRT 3, the plugin API is not available for UFF-based users (e.g. TensorFlow or PyTorch models).
That is, we don’t provide an interface for users to set their custom implementation with a UFF model.

We will enable plugins for the UFF parser in a future release, but there is no concrete schedule yet.
Thanks, and sorry for the inconvenience.

Thank you @AastaLLL for the info!
That’s actually a worrying revelation…

But now, can you please guide me on how to optimize a TensorFlow pre-trained model with non-supported layers using TensorRT 3?

(Since I am unable to directly parse it with the UFF parser due to the non-supported layers, and also unable to use a plugin factory for non-supported layers with the UFF parser.)

Hi,

We don’t provide an interface for a TensorFlow/UFF user to set their plugin implementation yet.
If you want to use a non-supported layer, please switch to the Caffe framework and write it with the plugin API.

Thanks and sorry for the inconvenience.

We are trying to create a TensorRT engine from our saved UFF file so that it can run on the Jetson board. We are getting the following error. Any idea what is wrong?

[TensorRT] ERROR: Specified INT8 but no calibrator provided
Traceback (most recent call last):
  File "generate_prune_graph.py", line 50, in <module>
    engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1<<20, trt.infer.DataType.INT8)
  File "/usr/lib/python2.7/dist-packages/tensorrt/utils/_utils.py", line 182, in uff_to_trt_engine
    raise AttributeError("Specified INT8 but no calibrator provided")
AttributeError: Specified INT8 but no calibrator provided

If we change the datatype to HALF, we get the following error:

[TensorRT] ERROR: Specified FP16 but not supported on platform
Traceback (most recent call last):
  File "generate_prune_graph.py", line 50, in <module>
    engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1<<20, trt.infer.DataType.HALF)
  File "/usr/lib/python2.7/dist-packages/tensorrt/utils/_utils.py", line 177, in uff_to_trt_engine
    raise AttributeError("Specified FP16 but not supported on platform")
AttributeError: Specified FP16 but not supported on platform
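The two errors point at a common pattern: INT8 needs a calibrator and FP16 needs hardware support, so a build script often tries the preferred precision and falls back (ultimately to FP32, i.e. `trt.infer.DataType.FLOAT` in this API). A generic sketch of that fallback chain; `build_fn` stands in for a call like `trt.utils.uff_to_trt_engine(...)`, and `fake_builder` merely reproduces the two AttributeErrors shown above:

```python
# Sketch: try precisions in order of preference, falling back when the
# builder rejects one. TensorRT 3's uff_to_trt_engine raises AttributeError
# for both the missing-calibrator and unsupported-FP16 cases.

def build_with_fallback(build_fn, precisions=("INT8", "HALF", "FLOAT")):
    errors = {}
    for p in precisions:
        try:
            return p, build_fn(p)
        except AttributeError as e:
            errors[p] = str(e)
    raise RuntimeError("no precision worked: %s" % errors)

def fake_builder(precision):
    if precision == "INT8":
        raise AttributeError("Specified INT8 but no calibrator provided")
    if precision == "HALF":
        raise AttributeError("Specified FP16 but not supported on platform")
    return "engine"

chosen, engine = build_with_fallback(fake_builder)
```

With this pattern the script above would end up building an FP32 engine on a platform without FP16 support; to actually use INT8 you would still need to supply a calibrator object to the builder.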