[TensorRT Nano] Input node feed_dict has a boolean value for training; how do I run inference on Nano?

Hi,
I have a TensorFlow model based on ResNet-101.
It is a frozen .pb file.
I converted it with the UFF converter (ver. 0.6.3).
Its input node is shown below:

name: "phase_train"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_BOOL
  }
}
attr {
  key: "shape"
  value {
    shape {
      unknown_rank: true
    }
  }
}

The problem is the "phase_train" value.
It switches our system between training and inference,
and it causes no problem inside the TensorFlow framework.

But on Nano with TensorRT, the UFF converter fails with "Switch layer not supported" errors,
because many layers consume the "phase_train" value.
As you know, for inference mode in TensorFlow you just set it to False in the feed_dict of session.run().
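For example, roughly like this (a minimal sketch; the .pb path and the "input"/"logits" tensor names are illustrative, only "phase_train" comes from my model):

import numpy as np
import tensorflow as tf

# Dummy batch just to make the sketch self-contained.
images = np.zeros((1, 224, 224, 3), dtype=np.float32)

# Load the frozen graph.
graph_def = tf.GraphDef()
with open('frozen_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

# Inference mode is selected at run time by feeding False
# to the boolean "phase_train" placeholder.
with tf.Session(graph=graph) as sess:
    logits = sess.run('logits:0',
                      feed_dict={'input:0': images,
                                 'phase_train:0': False})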

I wonder: must every layer that uses phase_train, such as the Switch layers, be converted to a custom layer?
Or is there another solution, such as removing those nodes for inference, or setting the value to False and regenerating the .pb file?

My TensorFlow .pb file also has Dropout layers.
I found an article, "Remove dropout layer from frozen model using Python code in TensorFlow".
(link: https://dato.ml/drop-dropout-from-frozen-model )

I wonder if there is a similar way to remove the unnecessary Switch layers for inference.

Thanks

Hi,

You need to remove the training nodes before exporting the frozen graph. Please do not attempt any graph hacks (such as the one suggested in the article you referred to).

This is my way to export a frozen graph for inference (tested with TensorFlow 1.13.1 and Keras 2.2.4):

from keras.models import load_model
from keras import backend as K

import tensorflow as tf

KERAS_MODEL_FILE = 'keras_model.h5'
FROZEN_MODEL_FILE = 'keras_frozen_model.pb'

if __name__ == "__main__":
    print("Freezing graph...")

    K.clear_session()
    K.set_learning_phase(0)  # build the graph in inference mode

    model = load_model(KERAS_MODEL_FILE)
    model.summary()

    sess = K.get_session()

    # Strip the ":0" tensor suffix to get the output node name.
    out_name = model.output.name.split(':')[0]
    print("Output layer: " + out_name)

    # Bake variables into constants, then drop training-only nodes.
    frozen_graph = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), [out_name])
    frozen_graph = tf.graph_util.remove_training_nodes(frozen_graph)

    with open(FROZEN_MODEL_FILE, "wb") as ofile:
        ofile.write(frozen_graph.SerializeToString())

    print("Finished!")

Please note that K.set_learning_phase(0) and tf.graph_util.remove_training_nodes(frozen_graph) are what remove the training nodes that are not needed for inference.
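The same two tf.graph_util calls should also work without Keras. A minimal sketch, assuming a plain TensorFlow checkpoint and a known output node name (both are placeholders here):

import tensorflow as tf

CHECKPOINT = 'model.ckpt'  # hypothetical checkpoint prefix
OUTPUT_NODE = 'logits'     # hypothetical output node name

saver = tf.train.import_meta_graph(CHECKPOINT + '.meta')
with tf.Session() as sess:
    saver.restore(sess, CHECKPOINT)
    # Bake variables into constants, then drop training-only nodes.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), [OUTPUT_NODE])
    frozen = tf.graph_util.remove_training_nodes(frozen)
    with open('frozen_model.pb', 'wb') as f:
        f.write(frozen.SerializeToString())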

Hi,

ResNet-101 is one of our example models.
You can check our tutorial for information first:
https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification

Thanks.

Thanks for your reply, klicker100.

But my model is plain TensorFlow, not Keras,
so I can't run your solution.

Thanks, AastaLLL.

I checked your tutorial and it works well.
Generating the .pb file from the .ckpt file, converting it to UFF, and using it in TensorRT all went fine.
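(For reference, the UFF conversion step can be scripted with the uff Python package; a minimal sketch, where the file names and the "logits" output node are placeholders:)

import uff

# Convert a frozen TensorFlow graph to UFF (uff 0.6.x).
uff.from_tensorflow_frozen_model(
    'frozen_model.pb',
    output_nodes=['logits'],
    output_filename='model.uff')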

But my network does not.
My network is based on ResNet-101 with some layers changed.
Following your tutorial, my network converts to a .pb file successfully.
But it still has attributes like "is_training = True" in the BatchNorm layers,
so the UFF conversion produces warnings like "Merge/Switch layer not supported".
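I suspect the fix is to rebuild the graph with is_training=False before freezing. A rough sketch of what I mean (a slim-style ResNet; the checkpoint path and file names are illustrative, and my own network would replace resnet_v2_101):

import tensorflow as tf
from tensorflow.contrib.slim.nets import resnet_v2

slim = tf.contrib.slim

inputs = tf.placeholder(tf.float32, [None, 224, 224, 3], name='input')
with slim.arg_scope(resnet_v2.resnet_arg_scope()):
    # is_training=False bakes inference-mode batch norm into the graph,
    # so no cond/Switch/Merge nodes are created around FusedBatchNorm.
    logits, _ = resnet_v2.resnet_v2_101(inputs, num_classes=1001,
                                        is_training=False)

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, 'resnet_v2_101.ckpt')  # hypothetical checkpoint
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), [logits.op.name])
    frozen = tf.graph_util.remove_training_nodes(frozen)
    with open('resnet_v2_101_frozen.pb', 'wb') as f:
        f.write(frozen.SerializeToString())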

What would be a helpful example for me?

And I am curious about one more thing:
if I am to write a custom layer for "Merge/Switch", it is not really a layer but a node.

The UFF parser output is:
Warning: No conversion function registered for layer: Switch yet.
Converting resnet50v2/resnet_v2_50/block1/unit_1/bottleneck_v2/preact/cond/FusedBatchNorm/Switch as Custom op: Switch
Warning: No conversion function registered for layer: Switch yet.
Converting resnet50v2/resnet_v2_50/block1/unit_1/bottleneck_v2/preact/cond/FusedBatchNorm/Switch_2 as Custom op: Switch
Warning: No conversion function registered for layer: Switch yet.
Converting resnet50v2/resnet_v2_50/block1/unit_1/bottleneck_v2/preact/cond/FusedBatchNorm/Switch_1 as Custom op: Switch

I read "4.1.2. Example 2: Adding A Custom Layer That Is Not Supported In UFF Using C++".
It is about replacing the "MyRelu6" layer with a custom plugin.
But the UFF parser output above shows that many layers use the Switch op.

In this case, must every layer that uses the Switch op be changed into a custom plugin?

Thanks.

Hi,

It looks like your model contains some unsupported operations.

Do you use tf.layers.batch_normalization() inside your model?
That layer generates three kinds of operations: FusedBatchNorm, Merge, and Switch.
Only FusedBatchNorm is currently supported by TensorRT.

Is it possible to use FusedBatchNorm directly in your use case?
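A minimal sketch of the idea: if the training argument is a Python constant rather than a boolean tensor such as your "phase_train" placeholder, no tf.cond is built, so no Switch/Merge nodes appear:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 224, 224, 3], name='input')

# training=False as a Python constant: the layer emits a plain
# FusedBatchNorm node with no cond/Switch/Merge around it.
y = tf.layers.batch_normalization(x, training=False)

# Feeding a bool *tensor* (e.g. the "phase_train" placeholder) as
# training would instead build a tf.cond, producing the unsupported ops.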

Thanks.