UFF converter - input attributes not updated after freezing

Hello everyone.
I am trying to convert a TensorFlow model to the UFF format, in order to eventually use the TensorRT engine. However, I ran into the following issue:

uff_model = uff.from_tensorflow_frozen_model(output_frozen_graph_name, [model_output], output_filename="model.uff")
File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 161, in from_tensorflow_frozen_model
return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 132, in from_tensorflow
name="main")
File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 97, in convert_tf2uff_graph
uff_graph, input_replacements)
File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 67, in convert_tf2uff_node
inp_node = tf_nodes[inp_name]
KeyError: u'lstm_1/while/MatMul_2'
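For reference, the freezing step looked roughly like this (a minimal TF 1.x sketch; the session restore is omitted and the names match my snippet above):

import tensorflow as tf
import uff

with tf.Session() as sess:
    # ... restore the trained model into sess here ...
    # Fold variables into constants so the graph can be serialized stand-alone.
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, [model_output])

with tf.gfile.GFile(output_frozen_graph_name, "wb") as f:
    f.write(frozen_graph_def.SerializeToString())

uff_model = uff.from_tensorflow_frozen_model(
    output_frozen_graph_name, [model_output], output_filename="model.uff")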

What I have done
Looking at the original graph, the following node exists:
node {
  name: "lstm_1/while/BiasAdd_2"
  op: "BiasAdd"
  input: "lstm_1/while/MatMul_2"
  input: "lstm_1/while/BiasAdd_2/Enter"
  device: "/device:CPU:0"
  attr {
    key: "T"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "data_format"
    value {
      s: "NHWC"
    }
  }
}
The frozen graph no longer contains some nodes from the original graph. However, the node that takes this node as input is still in the frozen graph, with its input attributes not updated (raw excerpt from the serialized .pb):
#lstm_2/while/BiasAdd_2##BiasAdd##lstm_2/while/MatMul_2##lstm_2/while/BiasAdd_2/Enter"
/device:CPU:0*#

There is no trace of the node "lstm_2/while/MatMul_2" in the frozen graph, and this is what raises the error. How can I handle this? Why is this happening? Why does the freezing operation not produce a "coherent" graph?
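To double-check, I inspected the frozen .pb roughly like this (TF 1.x sketch; the file name is just an example):

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

node_names = set(node.name for node in graph_def.node)
print("lstm_2/while/MatMul_2" in node_names)  # prints False: the node was pruned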

Thank you for your time and attention.

Daniele


Hi,

Freezing is a TensorFlow mechanism that removes nodes which are unnecessary for inference.
As a result, this issue comes from TensorFlow; it is not related to our UFF parser.

Have you tried setting the lstm_1/while/MatMul_2 node as an output?
By doing so, TensorFlow should preserve the node so that you can find it in the frozen graph.
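For example, something like this when freezing (a rough sketch with convert_variables_to_constants; sess and model_output stand for your session and original output name):

import tensorflow as tf

# Listing the node as an extra output keeps TensorFlow from pruning it
# during freezing.
frozen_graph_def = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph_def, [model_output, "lstm_1/while/MatMul_2"])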

Thanks.

Thank you for your answer.

Indeed, if I set the lstm_1/while/MatMul_2 node as an output, the error no longer appears. However:

  • Doesn't setting additional outputs affect the network? I am changing the structure of the network; will I still get the expected behavior?

  • About calling the UFF parser without freezing the model: when I tried it, the "Enter" operation node feeding the BiasAdd node led to the error:

File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 116, in convert_tf2numpy_dtype
return np.dtype(dt[dtype])

so basically it was an invalid type. This is curious, because BiasAdd is a supported node (https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#support_op).

That's why I tried freezing the graph in the first place.

Update: if I set the troublesome nodes as outputs, I eventually get the error

File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 142, in convert_tf2numpy_const_node
np_dtype = cls.convert_tf2numpy_dtype(tf_node.attr['dtype'].type)
File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 116, in convert_tf2numpy_dtype
return np.dtype(dt[dtype])
TypeError: data type "invalid" not understood

due to the node

name: "lstm_1/while/BiasAdd_2/Enter"
op: "Enter"
input: "lstm_1/strided_slice_10"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "frame_name"
  value {
    s: "lstm_1/while/while_context"
  }
}
attr {
  key: "is_constant"
  value {
    b: true
  }
}
attr {
  key: "parallel_iterations"
  value {
    i: 32
  }
}
attr {
  key: "value"
  value {
  }
}

Still, I don't know why, because the BiasAdd node is supported (https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#support_op). Could you please give me some tips?

Hi,

Sorry for the late reply.

Please note that the actual operation is defined by the 'op' field.
So the node you shared is an 'Enter' operation, not a 'BiasAdd'.
And unfortunately, the Enter op is currently not supported by TensorRT.
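You can verify this directly on the parsed GraphDef (a quick sketch, assuming graph_def holds your frozen graph):

for node in graph_def.node:
    if node.name == "lstm_1/while/BiasAdd_2/Enter":
        print(node.op)                  # prints "Enter", not "BiasAdd"
        print(node.attr['dtype'].type)  # 0 (DT_INVALID): the node has no 'dtype'
                                        # attr, which is what the converter trips on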

Thanks

Dear AastaLLL,

thank you for your reply. Do you think that node will be supported in the future, or is it better to find another way? Such a node is a product of TensorFlow's "conversion" of the Keras LSTM layer. The long short-term memory is exactly the point of my network; I cannot replace it.

Thanks,

Daniele