convert_to_uff.py fails while trying to convert a TensorFlow frozen model

Hello,

I'm trying to convert the frozen graph of a TensorFlow model to UFF format in order to load it into TensorRT.

However, the convert_to_uff.py script fails with the following error:

Using output node final_dense/Sigmoid
Converting to UFF graph
Traceback (most recent call last):
  File "convert_to_uff.py", line 93, in <module>
    main()
  File "convert_to_uff.py", line 89, in main
    debug_mode=args.debug
  File "C:\Users\spivakov\AppData\Local\conda\conda\envs\tensorflow-gpu\lib\site-packages\uff\converters\tensorflow\conversion_helpers.py", line 187, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "C:\Users\spivakov\AppData\Local\conda\conda\envs\tensorflow-gpu\lib\site-packages\uff\converters\tensorflow\conversion_helpers.py", line 157, in from_tensorflow
    debug_mode=debug_mode)
  File "C:\Users\spivakov\AppData\Local\conda\conda\envs\tensorflow-gpu\lib\site-packages\uff\converters\tensorflow\converter.py", line 94, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "C:\Users\spivakov\AppData\Local\conda\conda\envs\tensorflow-gpu\lib\site-packages\uff\converters\tensorflow\converter.py", line 72, in convert_tf2uff_node
    inp_node = tf_nodes[inp_name]
KeyError: 'final_dense/bias/read'

Here is the list of my model's layers; the model only uses layers that are supported by TensorRT:
1 Placeholder: “input_left”
2 Placeholder: “input_right”
3 Const: “conv1/kernel”
4 Const: “conv1/bias”
5 Const: “batch_normalization_1/gamma”
6 Const: “batch_normalization_1/beta”
7 Const: “batch_normalization_1/moving_mean”
8 Const: “batch_normalization_1/moving_variance”
9 Const: “conv2/kernel”
10 Const: “conv2/bias”
11 Const: “batch_normalization_2/gamma”
12 Const: “batch_normalization_2/beta”
13 Const: “batch_normalization_2/moving_mean”
14 Const: “batch_normalization_2/moving_variance”
15 Const: “conv3/conv3_1/kernel”
16 Const: “conv3/conv3_1/bias”
17 Const: “conv3/conv3_2/kernel”
18 Const: “conv3/conv3_2/bias”
19 Const: “batch_normalization_3/gamma”
20 Const: “batch_normalization_3/beta”
21 Const: “batch_normalization_3/moving_mean”
22 Const: “batch_normalization_3/moving_variance”
23 Const: “conv4/conv4_1/kernel”
24 Const: “conv4/conv4_1/bias”
25 Const: “conv4/conv4_2/kernel”
26 Const: “conv4/conv4_2/bias”
27 Const: “batch_normalization_4/gamma”
28 Const: “batch_normalization_4/beta”
29 Const: “batch_normalization_4/moving_mean”
30 Const: “batch_normalization_4/moving_variance”
31 Conv2D: “model_1/conv1/convolution”
32 BiasAdd: “model_1/conv1/BiasAdd”
33 Relu: “model_1/conv1/Relu”
34 MaxPool: “model_1/pool1/MaxPool”
35 FusedBatchNorm: “model_1/batch_normalization_1/FusedBatchNorm_1”
36 Conv2D: “model_1/conv2/convolution”
37 BiasAdd: “model_1/conv2/BiasAdd”
38 Relu: “model_1/conv2/Relu”
39 MaxPool: “model_1/pool2/MaxPool”
40 FusedBatchNorm: “model_1/batch_normalization_2/FusedBatchNorm_1”
41 Conv2D: “model_1/conv3/conv3_1/convolution”
42 BiasAdd: “model_1/conv3/conv3_1/BiasAdd”
43 Relu: “model_1/conv3/conv3_1/Relu”
44 Conv2D: “model_1/conv3/conv3_2/convolution”
45 BiasAdd: “model_1/conv3/conv3_2/BiasAdd”
46 Relu: “model_1/conv3/conv3_2/Relu”
47 MaxPool: “model_1/pool3/MaxPool”
48 FusedBatchNorm: “model_1/batch_normalization_3/FusedBatchNorm_1”
49 Conv2D: “model_1/conv4/conv4_1/convolution”
50 BiasAdd: “model_1/conv4/conv4_1/BiasAdd”
51 Relu: “model_1/conv4/conv4_1/Relu”
52 Conv2D: “model_1/conv4/conv4_2/convolution”
53 BiasAdd: “model_1/conv4/conv4_2/BiasAdd”
54 Relu: “model_1/conv4/conv4_2/Relu”
55 MaxPool: “model_1/pool4/MaxPool”
56 FusedBatchNorm: “model_1/batch_normalization_4/FusedBatchNorm_1”
57 Shape: “model_1/flatten_1/Shape”
58 Const: “model_1/flatten_1/strided_slice/stack”
59 Const: “model_1/flatten_1/strided_slice/stack_1”
60 Const: “model_1/flatten_1/strided_slice/stack_2”
61 StridedSlice: “model_1/flatten_1/strided_slice”
62 Const: “model_1/flatten_1/Const”
63 Prod: “model_1/flatten_1/Prod”
64 Const: “model_1/flatten_1/stack/0”
65 Pack: “model_1/flatten_1/stack”
66 Reshape: “model_1/flatten_1/Reshape”
67 Conv2D: “model_1_1/conv1/convolution”
68 BiasAdd: “model_1_1/conv1/BiasAdd”
69 Relu: “model_1_1/conv1/Relu”
70 MaxPool: “model_1_1/pool1/MaxPool”
71 FusedBatchNorm: “model_1_1/batch_normalization_1/FusedBatchNorm_1”
72 Conv2D: “model_1_1/conv2/convolution”
73 BiasAdd: “model_1_1/conv2/BiasAdd”
74 Relu: “model_1_1/conv2/Relu”
75 MaxPool: “model_1_1/pool2/MaxPool”
76 FusedBatchNorm: “model_1_1/batch_normalization_2/FusedBatchNorm_1”
77 Conv2D: “model_1_1/conv3/conv3_1/convolution”
78 BiasAdd: “model_1_1/conv3/conv3_1/BiasAdd”
79 Relu: “model_1_1/conv3/conv3_1/Relu”
80 Conv2D: “model_1_1/conv3/conv3_2/convolution”
81 BiasAdd: “model_1_1/conv3/conv3_2/BiasAdd”
82 Relu: “model_1_1/conv3/conv3_2/Relu”
83 MaxPool: “model_1_1/pool3/MaxPool”
84 FusedBatchNorm: “model_1_1/batch_normalization_3/FusedBatchNorm_1”
85 Conv2D: “model_1_1/conv4/conv4_1/convolution”
86 BiasAdd: “model_1_1/conv4/conv4_1/BiasAdd”
87 Relu: “model_1_1/conv4/conv4_1/Relu”
88 Conv2D: “model_1_1/conv4/conv4_2/convolution”
89 BiasAdd: “model_1_1/conv4/conv4_2/BiasAdd”
90 Relu: “model_1_1/conv4/conv4_2/Relu”
91 MaxPool: “model_1_1/pool4/MaxPool”
92 FusedBatchNorm: “model_1_1/batch_normalization_4/FusedBatchNorm_1”
93 Shape: “model_1_1/flatten_1/Shape”
94 Const: “model_1_1/flatten_1/strided_slice/stack”
95 Const: “model_1_1/flatten_1/strided_slice/stack_1”
96 Const: “model_1_1/flatten_1/strided_slice/stack_2”
97 StridedSlice: “model_1_1/flatten_1/strided_slice”
98 Const: “model_1_1/flatten_1/Const”
99 Prod: “model_1_1/flatten_1/Prod”
100 Const: “model_1_1/flatten_1/stack/0”
101 Pack: “model_1_1/flatten_1/stack”
102 Reshape: “model_1_1/flatten_1/Reshape”
103 Sub: “lambda_1/sub”
104 Abs: “lambda_1/Abs”
105 Const: “final_dense/kernel”
106 Const: “final_dense/bias”
107 MatMul: “final_dense/MatMul”
108 BiasAdd: “final_dense/BiasAdd”
109 Sigmoid: “final_dense/Sigmoid”

Any clue as to what I am doing wrong?

Thank you.
vggish_siamese.7z (16.1 MB)

Hello, can you share the full convert_to_uff.py command you used?

Please find it below.

python convert_to_uff.py --input-file vggish_siamese.pb

I also tried different variations, such as explicitly specifying the output node, but without any success.
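For reference, the roughly equivalent call through the UFF Python API, with the output node given explicitly, looks like the sketch below. The keyword names frozen_file and output_filename are my assumption about this UFF release, so please double-check them against your installed converter.

import uff

# Convert the frozen graph, naming the output node explicitly.
# frozen_file / output_filename are assumed keyword names for this UFF release.
uff.from_tensorflow_frozen_model(
    frozen_file="vggish_siamese.pb",
    output_nodes=["final_dense/Sigmoid"],
    output_filename="vggish_siamese.uff",
)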

Well, I found what the problem is: the converter does not support the 'read' op. However, in the TensorFlow graph every layer has a 'read' op for its kernel and bias, so I'm stuck on how to get around this. I'm using TensorFlow 1.12.0 to save the frozen model.
Any suggestions?
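In case it helps with debugging: the KeyError suggests that some node lists 'final_dense/bias/read' as an input, but the converter cannot find a node with that name. A minimal TF 1.x sketch (assuming the frozen graph is saved as vggish_siamese.pb) that reports such dangling input references in the serialized graph:

import tensorflow as tf

# Load the frozen GraphDef (TensorFlow 1.x API).
graph_def = tf.GraphDef()
with tf.gfile.GFile("vggish_siamese.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Report every input reference that points at a node missing from the graph;
# this is the same lookup that raises the KeyError inside the UFF converter.
names = {node.name for node in graph_def.node}
for node in graph_def.node:
    for inp in node.input:
        ref = inp.lstrip("^").split(":")[0]  # strip control-dep marker and output port
        if ref not in names:
            print("%s (%s) references missing node %r" % (node.name, node.op, inp))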

I solved the problem!
In the code that saves the frozen graph, you need to add a call that removes the training nodes:

graph = graph_util.remove_training_nodes(graph)
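For anyone hitting the same error later, here is a minimal, self-contained TF 1.x sketch of the freezing step with that call in place. The tiny stand-in graph and file name are placeholders for illustration, not the actual model from this thread.

import tensorflow as tf
from tensorflow.python.framework import graph_util

# Tiny stand-in graph ending in a Sigmoid, purely for illustration;
# replace it with your real model/session.
with tf.Session(graph=tf.Graph()) as sess:
    x = tf.placeholder(tf.float32, [None, 4], name="input_left")
    w = tf.Variable(tf.ones([4, 1]), name="final_dense/kernel")
    b = tf.Variable(tf.zeros([1]), name="final_dense/bias")
    tf.sigmoid(tf.matmul(x, w) + b, name="final_dense/Sigmoid")
    sess.run(tf.global_variables_initializer())

    # Fold variables into Const nodes, then strip the training-only nodes
    # (including the Identity "*/read" ops) so every remaining input
    # reference resolves when the UFF converter walks the graph.
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["final_dense/Sigmoid"])
    frozen = graph_util.remove_training_nodes(frozen)

    with tf.gfile.GFile("vggish_siamese.pb", "wb") as f:
        f.write(frozen.SerializeToString())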