Help needed using TensorRT 3 to create an inference engine for the facenet model.

Hello,

TensorRT is a good tool, and I am trying to use it to optimize inference for face recognition. However, after following the instructions in the new 3.0 release documentation for converting TensorFlow models directly, I get errors stating that some layers cannot be converted to UFF by the conversion tools shipped with TensorRT 3. It would be great if anyone here who has tried converting facenet ([url]https://github.com/davidsandberg/facenet[/url]) could help me.

If not, I am sure this would take very little time for any moderator here who has worked with TensorRT 3. I would really appreciate any help.

Thanks a lot in advance.

Achyut Boggaram

Hi,

Could you share the log of the UFF parser?
For the UFF parser, there is no API for implementing a non-supported layer.

It's recommended to use the Caffe framework if you have a non-supported layer.
For the Caffe parser, you can implement it with custom code via the Plugin API.

Additionally, we have a face-recognition sample with one detection network and one classification network.
For your reference: GitHub - AastaNV/Face-Recognition: Demonstrate Plugin API for TensorRT 2.1

Thanks.

Hi,

I am facing the same issue. For the following code:

import tensorrt as trt

# PATH_TO_FACENET_MODEL and img are defined earlier in the script.
trt_engine = trt.lite.Engine(framework="tf", path=PATH_TO_FACENET_MODEL, max_batch_size=10, input_nodes=["input", "phase_train"], output_nodes=["embeddings"])
results = trt_engine.infer(img)

where PATH_TO_FACENET_MODEL points to 20170512-110547.pb, which can be downloaded from the link provided in the readme (GitHub - davidsandberg/facenet: Face recognition using Tensorflow), I get warnings about missing conversion functions and an error about a node key not being found (see below):

[TensorRT] INFO: Detecting Framework
Using output node embeddings
Converting to UFF graph
Warning: keep_dims is not supported, ignoring...
Warning: No conversion function registered for layer: Square yet.
Converting as custom op Square embeddings/Square
name: "embeddings/Square"
op: "Square"
input: "InceptionResnetV1/Bottleneck/BatchNorm/batchnorm/add_1"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Warning: No conversion function registered for layer: Merge yet.
Converting as custom op Merge InceptionResnetV1/Bottleneck/BatchNorm/cond/Merge_1
name: "InceptionResnetV1/Bottleneck/BatchNorm/cond/Merge_1"
op: "Merge"
input: "InceptionResnetV1/Bottleneck/BatchNorm/cond/Switch_2"
input: "InceptionResnetV1/Bottleneck/BatchNorm/cond/Identity_1"
attr {
  key: "N"
  value {
    i: 2
  }
}
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Warning: No conversion function registered for layer: Switch yet.
Converting as custom op Switch InceptionResnetV1/Bottleneck/BatchNorm/cond/AssignMovingAvg_1/sub/Switch_1
name: "InceptionResnetV1/Bottleneck/BatchNorm/cond/AssignMovingAvg_1/sub/Switch_1"
op: "Switch"
input: "InceptionResnetV1/Bottleneck/BatchNorm/moments/normalize/variance"
input: "InceptionResnetV1/Bottleneck/BatchNorm/cond/pred_id"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "_class"
  value {
    list {
      s: "loc:@InceptionResnetV1/Bottleneck/BatchNorm/moments/normalize/variance"
    }
  }
}

Warning: No conversion function registered for layer: Square yet.
Converting as custom op Square InceptionResnetV1/Bottleneck/BatchNorm/moments/normalize/Square
name: "InceptionResnetV1/Bottleneck/BatchNorm/moments/normalize/Square"
op: "Square"
input: "InceptionResnetV1/Bottleneck/BatchNorm/moments/normalize/shifted_mean"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Traceback (most recent call last):
  File "get_faces_trt.py", line 256, in <module>
    main(sys.argv)
  File "get_faces_trt.py", line 169, in main
    trt_engine = trt.lite.Engine(framework="tf", path=PATH_TO_FACENET_MODEL, max_batch_size=10, input_nodes=["input", "phase_train"], output_nodes=["embeddings"])
  File "/usr/lib/python2.7/dist-packages/tensorrt/lite/engine.py", line 171, in __init__
    modelstream = uff.from_tensorflow_frozen_model(path, output_nodes)
  File "/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 103, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, **kwargs)
  File "/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 75, in from_tensorflow
    name="main")
  File "/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 64, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 46, in convert_tf2uff_node
    inp_node = tf_nodes[inp_name]
KeyError: u'^InceptionResnetV1/Bottleneck/BatchNorm/moments/sufficient_statistics/mean_ss'
[TensorRT] INFO: Tearing down engine

I could avoid the error by checking whether the key exists before looking it up, but then the conversion would probably no longer be valid. Does anyone have an idea how to tackle this? Thanks!
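
One workaround that might be worth trying (untested here): the Switch/Merge and moving-average nodes come from the batch-norm training branch, so stripping training-only ops before running the converter may remove many of the offending layers. Below is a rough sketch using TensorFlow 1.x's graph_transforms tool; the transform list may need tuning for this model.

import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

# Load the frozen facenet graph (20170512-110547.pb).
graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_FACENET_MODEL, "rb") as f:
    graph_def.ParseFromString(f.read())

# Drop training-only plumbing (Identity/CheckNumerics nodes, constant
# subgraphs) and fold the batch norms into the preceding convolutions.
transforms = [
    "strip_unused_nodes",
    "remove_nodes(op=Identity, op=CheckNumerics)",
    "fold_constants(ignore_errors=true)",
    "fold_batch_norms",
    "fold_old_batch_norms",
]
cleaned = TransformGraph(graph_def, ["input", "phase_train"],
                         ["embeddings"], transforms)

with tf.gfile.GFile("facenet_cleaned.pb", "wb") as f:
    f.write(cleaned.SerializeToString())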

Hi,

From the error, there are some non-supported layers in the given model.
Please check whether all of the layers are available in the current TensorRT libraries; a quick way to list the ops in your model is sketched after the links below.

We have listed the supported layers for the UFF parser and the TensorRT engine in detail:
UFF parser: Developer Guide :: NVIDIA Deep Learning TensorRT Documentation
TensorRT engine: Developer Guide :: NVIDIA Deep Learning TensorRT Documentation
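
As a quick check, you can dump every op type used in the frozen graph and compare it against the tables above. A small sketch, assuming a TensorFlow 1.x environment and the frozen 20170512-110547.pb model mentioned earlier:

import tensorflow as tf

# Collect the unique op types used by the frozen model and print them
# for comparison with the supported-layer tables linked above.
graph_def = tf.GraphDef()
with tf.gfile.GFile("20170512-110547.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for op in sorted({node.op for node in graph_def.node}):
    print(op)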

If any limitation is blocking your development, please let us know by filling out this survey:
http://go.nvidianews.com/a0015609EO0h00mNGFMlvUE

Thanks.

Hello @AastaLLL,

Please find the UFF parser log below:

Converting to UFF graph
Warning: keep_dims is not supported, ignoring...
Warning: No conversion function registered for layer: Square yet.
Converting as custom op Square embeddings/Square
name: "embeddings/Square"
op: "Square"
input: "InceptionResnetV1/Bottleneck/BatchNorm/batchnorm/add_1"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Traceback (most recent call last):
  File "Desktop/facenet_graph.py", line 29, in <module>
    uff_model = uff.from_tensorflow(tf_model, ["embeddings"])
  File "/home/ovuser/tensorflow/local/lib/python2.7/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 75, in from_tensorflow
    name="main")
  File "/home/ovuser/tensorflow/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 64, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/home/ovuser/tensorflow/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 51, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes)
  File "/home/ovuser/tensorflow/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 28, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "/home/ovuser/tensorflow/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 177, in parse_tf_attrs
    for key, val in attrs.items()}
  File "/home/ovuser/tensorflow/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 177, in <dictcomp>
    for key, val in attrs.items()}
  File "/home/ovuser/tensorflow/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 172, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "/home/ovuser/tensorflow/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 146, in convert_tf2uff_field
    return TensorFlowToUFFConverter.convert_tf2numpy_dtype(val)
  File "/home/ovuser/tensorflow/local/lib/python2.7/site-packages/uff/converters/tensorflow/converter.py", line 74, in convert_tf2numpy_dtype
    return np.dtype(dt[dtype])
TypeError: list indices must be integers, not AttrValue

I understand that some layers, such as Square, are not supported, but I need this to work. Is there a way I can perform this conversion to a TensorRT engine myself? If so, could you please assist me in doing so?

Hi,

Unary ops are supported by TensorRT but not by the UFF parser (they are only available to Caffe and API users).
Developer Guide :: NVIDIA Deep Learning TensorRT Documentation

Unary support in UFF is on our roadmap, but we cannot share a concrete schedule.
Please watch our announcements for the latest updates.
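
If a single unsupported op is the only blocker, one workaround is to build that piece of the network manually with the network-definition API. A minimal sketch using the later TensorRT Python API (not the 3.0 trt.lite interface shown earlier in this thread); the input name and shape are illustrative, and Square is expressed as an element-wise product since there is no dedicated Square layer:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()  # creation flags differ between TensorRT versions

# A 128-dimensional embedding tensor; the name and shape are illustrative only.
x = network.add_input("embedding_in", trt.float32, (1, 128))

# There is no Square layer, but x * x via an element-wise PROD is equivalent.
square = network.add_elementwise(x, x, trt.ElementWiseOperation.PROD)
network.mark_output(square.get_output(0))

# From here, build the engine with the standard builder flow for your
# installed TensorRT version (builder config, then engine build).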

Thanks.

Hi @AastaLLL,

We are very much in need of control-flow op support in UFF, especially the Merge and Switch ops. Please let me know if your team can provide support for these two operations as early as possible. Thanks a million for adding unary op support in TensorRT 4.0.0.3, by the way.

Thanks a lot in advance

Thanks for the feedback.

It is also recommended to fill out this user-experience survey:
http://go.nvidianews.com/a0015609EO0h00mNGFMlvUE

Hello @AastaLLL,

I would really appreciate an example of converting a TensorFlow model that uses batch normalization into a TensorRT engine.

Thanks,
Achyut Boggaram

Hi,

There is no available example for converting TensorFlow batch normalization to TensorRT, but the FusedBatchNorm op is supported in TensorRT 4.0.
Developer Guide :: NVIDIA Deep Learning TensorRT Documentation
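
For background, inference-time batch normalization reduces to a per-channel scale and shift, which is why a FusedBatchNorm node can be folded into the preceding convolution. A small NumPy sketch of the folding math (illustrative values only):

import numpy as np

def fold_batch_norm(gamma, beta, mean, var, eps=1e-3):
    # FusedBatchNorm at inference time:
    #   y = gamma * (x - mean) / sqrt(var + eps) + beta
    #     = scale * x + shift
    scale = gamma / np.sqrt(var + eps)
    shift = beta - mean * scale
    return scale, shift

# These scale/shift values can then be merged into the weights and bias
# of the convolution that feeds the batch-norm node.
scale, shift = fold_batch_norm(np.ones(3), np.zeros(3),
                               np.array([0.1, 0.2, 0.3]),
                               np.array([1.0, 1.0, 1.0]))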

Thanks.

Hi, if that is the case, can you please ask the TensorRT team to provide a small example that uses batch normalization with TensorFlow?

Thanks a lot,
Achyut Boggaram

Hi,

Could you check if this GitHub helps?

For example, the batch normalization layer is used inside the ResNet model.
You can adapt the ResNet flow to your use case.

Thanks.

Hi,

Could you check if this GitHub helps?
[url]https://github.com/JerryJiaGit/facenet_trt[/url]

I have an implementation that uses the TensorRT Python API to run facenet inference with TensorFlow.

I see about a 30% speed improvement with TensorRT for ResNet on a GV100.

However, it looks like TensorRT has no ARM64 Python support so far, so you may want to try it on a Volta/Turing GPU first.
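
For reference, the general TF-TRT flow that this kind of integration builds on looks roughly like the sketch below (assuming TensorFlow 1.x with the contrib TensorRT module; it illustrates the approach rather than being a drop-in script):

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Load the frozen facenet graph.
graph_def = tf.GraphDef()
with tf.gfile.GFile("20170512-110547.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Replace TensorRT-compatible subgraphs with TRTEngineOp nodes;
# unsupported ops (Switch, Merge, ...) remain in TensorFlow.
trt_graph = trt.create_inference_graph(
    input_graph_def=graph_def,
    outputs=["embeddings"],
    max_batch_size=10,
    max_workspace_size_bytes=1 << 30,
    precision_mode="FP16")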

FYI.
Thanks,
Jerry