Converting facenet.h5 to .pb

I am trying to convert a facenet .h5 model to .pb using the steps described in:
Speeding up Deep Learning Inference Using TensorFlow, ONNX, and TensorRT | NVIDIA Developer Blog
But no .pb file is generated; the script fails with the error shown below.
The code is as follows:
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.keras.models import Model
from tensorflow.python.keras import backend as K
from tensorflow.keras.models import load_model
import argparse

tf.compat.v1.disable_eager_execution()
K.set_learning_phase(0)

def keras_to_pb(model, output_filename, output_node_names):
    """
    This is the function to convert the Keras model to pb.

    Args:
        model: The Keras model.
        output_filename: The output .pb file name.
        output_node_names: The output nodes of the network. If None, then
            the function gets the last layer name as the output node.
    """
    # Get the names of the input and output nodes.
    in_name = model.layers[0].get_output_at(0).name.split(':')[0]

    if output_node_names is None:
        output_node_names = [model.layers[-1].get_output_at(0).name.split(':')[0]]

    sess = K.get_session()

    # The TensorFlow freeze_graph expects a comma-separated string of output node names.
    output_node_names_tf = ','.join(output_node_names)

    frozen_graph_def = graph_util.convert_variables_to_constants(
        sess,
        sess.graph.as_graph_def(),
        output_node_names)

    sess.close()
    wkdir = ''
    tf.io.write_graph(frozen_graph_def, wkdir, output_filename, as_text=False)

    return in_name, output_node_names

def main(args):
    # Load the pre-trained FaceNet Keras model from the .h5 file.
    model = load_model(args.model_path)

    # Convert the Keras model to a .pb file.
    in_tensor_name, out_tensor_names = keras_to_pb(model, args.output_pb_file, None)

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--model_path', type=str, default='facenet_keras.h5')
    parser.add_argument('--output_pb_file', type=str, default='facenet.pb')
    args = parser.parse_args()
    main(args)

Error:

2021-06-22 15:24:25.221701: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/keras/backend.py:435: UserWarning: tf.keras.backend.set_learning_phase is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the training argument of the __call__ method of your layer or model.
warnings.warn('tf.keras.backend.set_learning_phase is deprecated and ’
2021-06-22 15:24:26.898438: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-06-22 15:24:26.955760: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-22 15:24:26.956356: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:00:1e.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
2021-06-22 15:24:26.956386: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-06-22 15:24:26.959579: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-06-22 15:24:26.959640: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-06-22 15:24:26.960701: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-06-22 15:24:26.960994: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-06-22 15:24:26.964340: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-06-22 15:24:26.965160: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-06-22 15:24:26.965364: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-06-22 15:24:26.965448: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-22 15:24:26.966038: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-22 15:24:26.966589: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
WARNING:tensorflow:From /home/ubuntu/.local/lib/python3.6/site-packages/keras/layers/normalization.py:524: _colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2021-06-22 15:24:32.904480: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-06-22 15:24:32.904888: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-22 15:24:32.905495: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:00:1e.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
2021-06-22 15:24:32.905588: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-22 15:24:32.906149: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-22 15:24:32.906693: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-06-22 15:24:32.906746: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-06-22 15:24:33.515833: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-06-22 15:24:33.515872: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264] 0
2021-06-22 15:24:33.515883: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0: N
2021-06-22 15:24:33.516045: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-22 15:24:33.516648: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-22 15:24:33.517203: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-22 15:24:33.517743: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 13803 MB memory) → physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:1e.0, compute capability: 7.5)
2021-06-22 15:24:34.122337: I tensorflow/core/platform/profile_utils/cpu_utils.cc:114] CPU Frequency: 2499995000 Hz
WARNING:tensorflow:No training configuration found in the save file, so the model was not compiled. Compile it manually.
2021-06-22 15:24:36.638373: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-22 15:24:36.638726: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:00:1e.0 name: Tesla T4 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
2021-06-22 15:24:36.638857: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-22 15:24:36.639260: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-22 15:24:36.639587: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-06-22 15:24:36.639628: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-06-22 15:24:36.639641: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264] 0
2021-06-22 15:24:36.639653: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0: N
2021-06-22 15:24:36.639756: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-22 15:24:36.640099: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-22 15:24:36.640407: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 13803 MB memory) → physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:1e.0, compute capability: 7.5)
WARNING:tensorflow:From /home/ubuntu/sayanti/tf2trt_with_onnx/keras_to_pb_tf2.py:37: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.convert_variables_to_constants
WARNING:tensorflow:From /home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/framework/convert_to_constants.py:857: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
Traceback (most recent call last):
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py”, line 1375, in _do_call
return fn(*args)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py”, line 1360, in _run_fn
target_list, run_metadata)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py”, line 1453, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.FailedPreconditionError: 2 root error(s) found.
(0) Failed precondition: Could not find variable Block17_5_Branch_0_Conv2d_1x1_BatchNorm/moving_variance. This could mean that the variable has been deleted. In TF1, it can also mean the variable is uninitialized. Debug info: container=localhost, status=Not found: Container localhost does not exist. (Could not find resource: localhost/Block17_5_Branch_0_Conv2d_1x1_BatchNorm/moving_variance)
[[{{node Block17_5_Branch_0_Conv2d_1x1_BatchNorm/moving_variance/Read/ReadVariableOp}}]]
[[Block17_10_Branch_1_Conv2d_0c_7x1/kernel/Read/ReadVariableOp/_25]]
(1) Failed precondition: Could not find variable Block17_5_Branch_0_Conv2d_1x1_BatchNorm/moving_variance. This could mean that the variable has been deleted. In TF1, it can also mean the variable is uninitialized. Debug info: container=localhost, status=Not found: Container localhost does not exist. (Could not find resource: localhost/Block17_5_Branch_0_Conv2d_1x1_BatchNorm/moving_variance)
[[{{node Block17_5_Branch_0_Conv2d_1x1_BatchNorm/moving_variance/Read/ReadVariableOp}}]]
0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File “conv.py”, line 14, in
input_name, output_node_names = keras_to_pb(model, PB_FILE_PATH, None)
File “/home/ubuntu/sayanti/tf2trt_with_onnx/keras_to_pb_tf2.py”, line 37, in keras_to_pb
output_node_names)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py”, line 337, in new_func
return func(*args, **kwargs)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/framework/graph_util_impl.py”, line 281, in convert_variables_to_constants
variable_names_denylist=variable_names_blacklist)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/framework/convert_to_constants.py”, line 1165, in convert_variables_to_constants_from_session_graph
variable_names_denylist=variable_names_denylist))
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/framework/convert_to_constants.py”, line 876, in init
converted_tensors = session.run(tensor_names_to_convert)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py”, line 968, in run
run_metadata_ptr)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py”, line 1191, in _run
feed_dict_tensor, options, run_metadata)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py”, line 1369, in _do_run
run_metadata)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py”, line 1394, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: 2 root error(s) found.
(0) Failed precondition: Could not find variable Block17_5_Branch_0_Conv2d_1x1_BatchNorm/moving_variance. This could mean that the variable has been deleted. In TF1, it can also mean the variable is uninitialized. Debug info: container=localhost, status=Not found: Container localhost does not exist. (Could not find resource: localhost/Block17_5_Branch_0_Conv2d_1x1_BatchNorm/moving_variance)
[[node Block17_5_Branch_0_Conv2d_1x1_BatchNorm/moving_variance/Read/ReadVariableOp (defined at /home/ubuntu/.local/lib/python3.6/site-packages/keras/engine/base_layer_utils.py:127) ]]
[[Block17_10_Branch_1_Conv2d_0c_7x1/kernel/Read/ReadVariableOp/_25]]
(1) Failed precondition: Could not find variable Block17_5_Branch_0_Conv2d_1x1_BatchNorm/moving_variance. This could mean that the variable has been deleted. In TF1, it can also mean the variable is uninitialized. Debug info: container=localhost, status=Not found: Container localhost does not exist. (Could not find resource: localhost/Block17_5_Branch_0_Conv2d_1x1_BatchNorm/moving_variance)
[[node Block17_5_Branch_0_Conv2d_1x1_BatchNorm/moving_variance/Read/ReadVariableOp (defined at /home/ubuntu/.local/lib/python3.6/site-packages/keras/engine/base_layer_utils.py:127) ]]
0 successful operations.
0 derived errors ignored.

Original stack trace for ‘Block17_5_Branch_0_Conv2d_1x1_BatchNorm/moving_variance/Read/ReadVariableOp’:
File “conv.py”, line 13, in
model = load_model(MODEL_PATH)
File “/home/ubuntu/.local/lib/python3.6/site-packages/keras/saving/save.py”, line 202, in load_model
compile)
File “/home/ubuntu/.local/lib/python3.6/site-packages/keras/saving/hdf5_format.py”, line 181, in load_model_from_hdf5
custom_objects=custom_objects)
File “/home/ubuntu/.local/lib/python3.6/site-packages/keras/saving/model_config.py”, line 59, in model_from_config
return deserialize(config, custom_objects=custom_objects)
File “/home/ubuntu/.local/lib/python3.6/site-packages/keras/layers/serialization.py”, line 163, in deserialize
printable_module_name=‘layer’)
File “/home/ubuntu/.local/lib/python3.6/site-packages/keras/utils/generic_utils.py”, line 672, in deserialize_keras_object
list(custom_objects.items())))
File “/home/ubuntu/.local/lib/python3.6/site-packages/keras/engine/training.py”, line 2332, in from_config
functional.reconstruct_from_config(config, custom_objects))
File “/home/ubuntu/.local/lib/python3.6/site-packages/keras/engine/functional.py”, line 1284, in reconstruct_from_config
process_node(layer, node_data)
File “/home/ubuntu/.local/lib/python3.6/site-packages/keras/engine/functional.py”, line 1232, in process_node
output_tensors = layer(input_tensors, **kwargs)
File “/home/ubuntu/.local/lib/python3.6/site-packages/keras/engine/base_layer_v1.py”, line 745, in call
self._maybe_build(inputs)
File “/home/ubuntu/.local/lib/python3.6/site-packages/keras/engine/base_layer_v1.py”, line 2066, in _maybe_build
self.build(input_shapes)
File “/home/ubuntu/.local/lib/python3.6/site-packages/keras/layers/normalization.py”, line 451, in build
experimental_autocast=False)
File “/home/ubuntu/.local/lib/python3.6/site-packages/keras/engine/base_layer_v1.py”, line 440, in add_weight
caching_device=caching_device)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/training/tracking/base.py”, line 815, in _add_variable_with_custom_getter
**kwargs_for_getter)
File “/home/ubuntu/.local/lib/python3.6/site-packages/keras/engine/base_layer_utils.py”, line 127, in make_variable
shape=variable_shape if variable_shape else None)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py”, line 260, in call
return cls._variable_v1_call(*args, **kwargs)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py”, line 221, in _variable_v1_call
shape=shape)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py”, line 199, in
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py”, line 2626, in default_variable_creator
shape=shape)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/ops/variables.py”, line 264, in call
return super(VariableMetaclass, cls).call(*args, **kwargs)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py”, line 1595, in init
distribute_strategy=distribute_strategy)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py”, line 1777, in _init_from_args
value = gen_resource_variable_ops.read_variable_op(handle, dtype)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/ops/gen_resource_variable_ops.py”, line 485, in read_variable_op
“ReadVariableOp”, resource=resource, dtype=dtype, name=name)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py”, line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py”, line 3565, in _create_op_internal
op_def=op_def)
File “/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py”, line 2045, in init
self._traceback = tf_stack.extract_stack_for_node(self._c_op)

Sorry for the delay!
Will check and get back to you.

Tried with the NGC TensorFlow docker image nvcr.io/nvidia/tensorflow:19.10-py3, and it works well.

Launch the docker container:
$ nvidia-docker run -it --net=host --ipc=host --publish 0.0.0.0:6006:6006 -v /home/$user/:/home/$user/ --rm nvcr.io/nvidia/tensorflow:19.10-py3

Inside the container:

# pip install keras==2.2.5
# mkdir models
# python convert_to_pb.py     # script from https://developer.nvidia.com/blog/speeding-up-deep-learning-inference-using-tensorflow-onnx-and-tensorrt/
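For reference, if you would rather stay on native TF2 instead of the 19.10 container, a commonly used alternative (not the script from the blog post, just a minimal sketch) is to freeze the graph with convert_variables_to_constants_v2 rather than the TF1 session-based convert_variables_to_constants. It assumes the .h5 file loads with tf.keras and that eager execution is left enabled (i.e. no disable_eager_execution() call); the file paths are assumptions:

import tensorflow as tf
from tensorflow.keras.models import load_model
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Path is an assumption; adjust to your facenet .h5 file.
model = load_model('facenet_keras.h5')

# Wrap the model in a tf.function and trace it with the model's input signature.
full_model = tf.function(lambda x: model(x))
concrete_func = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

# Fold the variables into constants, producing a frozen concrete function.
frozen_func = convert_variables_to_constants_v2(concrete_func)
print('inputs :', [t.name for t in frozen_func.inputs])
print('outputs:', [t.name for t in frozen_func.outputs])

# Serialize the frozen GraphDef to models/facenet.pb.
tf.io.write_graph(frozen_func.graph, 'models', 'facenet.pb', as_text=False)

The resulting facenet.pb (together with the printed input/output tensor names) should then feed into the ONNX/TensorRT steps from the blog post in the same way as the output of convert_to_pb.py.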
