Failed to convert an Inception-V3 model to UFF

Hi,

I’m new to TensorRT, and I’m trying to convert an Inception-V3 model (the model is attached) to UFF with TensorRT 4.0.
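For context, my tensorrt_gen.py boils down to this call (it is the same one visible in the traceback below; the .pb path is my local file):

import uff

# Convert the frozen Inception-V3 graph to UFF, using "softmax" as the output node.
uff_model = uff.from_tensorflow_frozen_model(
    frozen_file="./classify_image_graph_def.pb",
    output_nodes=["softmax"],
    output_filename="tmp/",
    text=False)

However, running it produces the error below: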

Using output node softmax
Converting to UFF graph
Warning: No conversion function registered for layer: Concat yet.
Converting as custom op Concat mixed_10/join
name: "mixed_10/join"
op: "Concat"
input: "mixed_10/join/concat_dim"
input: "mixed_10/conv"
input: "mixed_10/tower/mixed/conv"
input: "mixed_10/tower/mixed/conv_1"
input: "mixed_10/tower_1/mixed/conv"
input: "mixed_10/tower_1/mixed/conv_1"
input: "mixed_10/tower_2/conv"
attr {
  key: "N"
  value {
    i: 6
  }
}
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Traceback (most recent call last):
  File "tensorrt_gen.py", line 24, in <module>
    uff_model =uff.from_tensorflow_frozen_model(frozen_file="./classify_image_graph_def.pb",output_nodes=["softmax"],output_filename='tmp/',text=False)
  File "/usr/lib/python2.7/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 113, in from_tensorflow_frozen_model
    return from_tensorflow(tf_graphdef, output_nodes, **kwargs)
  File "/usr/lib/python2.7/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 77, in from_tensorflow
    name="main")
  File "/usr/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 74, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/usr/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 61, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes)
  File "/usr/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 31, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "/usr/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 201, in parse_tf_attrs
    for key, val in attrs.items()}
  File "/usr/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 201, in <dictcomp>
    for key, val in attrs.items()}
  File "/usr/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 196, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "/usr/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 170, in convert_tf2uff_field
    return TensorFlowToUFFConverter.convert_tf2numpy_dtype(val)
  File "/usr/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 87, in convert_tf2numpy_dtype
    return np.dtype(dt[dtype])
TypeError: list indices must be integers, not AttrValue

Does this mean the ‘Concat’ layer is not supported by TensorRT 4.0? If so, TensorRT seems a little impractical.

Any help would be appreciated!

Sorry, I failed to add the attachment. There’s no response after I select the .pb model file to upload. Is there a default restriction, such as a file size limit? My model is about 90 MB.

Hi,

The Concat layer is supported by TensorRT, but please remember to use the NCHW format.
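For example, register the network input in channels-first (CHW) order when building the engine from the UFF model. A rough sketch with the TensorRT 4 Python API (the input/output names and dimensions are assumptions for Inception-V3, and uff_model is the buffer returned by uff.from_tensorflow_frozen_model):

import tensorrt as trt
from tensorrt.parsers import uffparser

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)

# Register the input in CHW order; the trailing 0 selects the NCHW input order.
parser = uffparser.create_uff_parser()
parser.register_input("input", (3, 299, 299), 0)
parser.register_output("softmax")

# Build an engine from the in-memory UFF model (batch size 1, 1 MB workspace).
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)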

We have an example of converting inception_v3 here for your reference:

Thanks.

Hey Aasta,

I just attempted to convert Inception_v3, hit a UFF parser error, and stumbled upon this post. I was independently using the model downloaded from the link above. UFF is unable to convert its FusedBatchNormV3 layer:

Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV3/InceptionV3/Mixed_7c/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV3/InceptionV3/Mixed_7b/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV3/InceptionV3/Mixed_6e/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV3/InceptionV3/Mixed_6d/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV3/InceptionV3/Mixed_6c/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV3/InceptionV3/Mixed_6b/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV3/InceptionV3/Mixed_5d/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV3/InceptionV3/Mixed_5c/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3
Warning: No conversion function registered for layer: FusedBatchNormV3 yet.
Converting InceptionV3/InceptionV3/Mixed_5b/Branch_3/Conv2d_0b_1x1/BatchNorm/FusedBatchNormV3 as custom op: FusedBatchNormV3

Thoughts on a fix?

Thanks

Edit: on closer inspection, the UFF parser also fails to convert inception_v1, v2, and v4. Oddly, I converted inception_v1 a couple of weeks ago when I was using TensorRT 4.1. I have since updated to TensorRT 4.2, and this is my first time trying to convert the Inception models on 4.2, so this is likely a TensorRT bug.

Hi,

Do you mean JetPack 4.2?
We have TensorRT 5.0 now. Would you mind updating your package and trying again first?

Thanks.

Hey Aasta,

Yeah, I misspoke: it was JetPack 4.2 and TensorRT 5.0.6.3. It was essentially right after I reflashed in this thread: https://devtalk.nvidia.com/default/topic/1052449/jetson-agx-xavier/cuda-memory-error-when-enabling-the-dla/post/5349947/#5349947

I tested the UFF converter so I could keep collecting timing data while the other problem is being looked at, and arrived at the results above.

Thoughts?

I can also confirm this. I was able to convert frozen graphs to UFF files for ResNet50 on JetPack 4.1. After upgrading JetPack to 4.2, convert-to-uff started complaining about FusedBatchNormV3 not being supported. I also had the same issue on my desktop with TensorRT 5.1, so I downgraded to TensorRT 5.0 (the version used in JetPack 4.1) and the issue was resolved.
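For anyone who cannot downgrade, one workaround I have seen suggested is rewriting the op type with graphsurgeon before running convert-to-uff. A sketch (file names are placeholders; FusedBatchNormV3 has an extra output and dtype attribute that this simply ignores, so verify the converted model’s accuracy afterwards):

import graphsurgeon as gs

# Rewrite every FusedBatchNormV3 node to the original FusedBatchNorm op,
# which the UFF converter does have a conversion function for.
graph = gs.DynamicGraph("frozen_resnet50.pb")
for node in graph.find_nodes_by_op("FusedBatchNormV3"):
    node.op = "FusedBatchNorm"
graph.write("frozen_resnet50_patched.pb")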

However, now for InceptionV3, it is complaining about the Fill layer not being supported! Any ideas?

Warning: No conversion function registered for layer: Fill yet.
Converting batch_normalization_80/ones_like as custom op: Fill

Hi,

Fill is not in our support scope currently:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html

To unblock this issue, you can implement it with our plugin API.
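On the TensorFlow side, you can remap the unsupported nodes to your plugin with graphsurgeon before running the UFF conversion. A rough sketch (the plugin op name "FillPlugin_TRT" is only an example; it must match a plugin you implement and register through the plugin API):

import graphsurgeon as gs

graph = gs.DynamicGraph("frozen_model.pb")

# Replace every Fill node with a plugin node so the UFF parser will look up
# the plugin by its op name instead of failing on the unsupported layer.
plugin_map = {
    node.name: gs.create_plugin_node(name=node.name, op="FillPlugin_TRT")
    for node in graph.find_nodes_by_op("Fill")
}
graph.collapse_namespaces(plugin_map)
graph.write("frozen_model_patched.pb")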
Thanks.

Hello AastaLLL,

The InceptionV3 from native TensorFlow seems to convert to UFF fine. However, I am using InceptionV3 from Keras Applications, and it would be a tremendous effort for us to rewrite our training methodology in native TensorFlow.
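For reference, I freeze the Keras model roughly like this (TF 1.x; a minimal sketch with placeholder paths). I have seen suggestions that forcing the learning phase to 0 before building the model avoids some training-only ops, so that is included here:

import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.applications import InceptionV3

# Build the model in inference mode; this is supposed to avoid some
# training-only ops (e.g. the ones_like behind the Fill node).
K.set_learning_phase(0)
model = InceptionV3(weights="imagenet")

# Freeze variables into constants and serialize the graph.
sess = K.get_session()
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), [out.op.name for out in model.outputs])
with tf.gfile.GFile("inception_v3_frozen.pb", "wb") as f:
    f.write(frozen.SerializeToString())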

Has anyone tried to write a custom plugin for the Fill layer? If not, is there a step-by-step guide for writing one? Writing a custom plugin also seems to require a lot of effort.

Thanks,

Hi,

We have just released a new JetPack installer that contains the latest TensorRT 5.1 package.
Would you mind giving it a try? It includes several parser updates.

Thanks.

Hello AastaLLL,

I upgraded my JetPack and it now uses TensorRT 5.1.6. However, I still get the same error.

Is the new TensorRT supposed to support the Fill/ones_like layer?

Thanks,

I managed to fix my issue; see here:
https://devtalk.nvidia.com/default/topic/1057651/tensorrt/implementing-fill-layer-as-custom-plugin/post/5430414/#5430414