create_inference_graph error

Hello,

I have a pre-trained Keras model (MobileNetV2). I followed the steps to convert the Keras model into a frozen TensorFlow graph (.pb) and then reload that graph during inference.

My code looks like this:

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt
import pdb
import os
import os.path as osp
from tensorflow.python.platform import gfile
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io
from tensorflow.keras.models import load_model
from tensorflow.keras import backend as K
from tensorflow.python.framework import tensor_util

...
...
...
...


with gfile.FastGFile("Path/to/.pb/file", 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    # sess.graph.as_default()
    # g_in = tf.import_graph_def(graph_def)

output_names = 'import/dense/Softmax:0'

trt_graph = trt.create_inference_graph(
    input_graph_def=graph_def,
    outputs=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 << 15,
    precision_mode='FP16',
    minimum_segment_size=10
)

tf.import_graph_def(trt_graph, name='')

# write to TensorBoard (check TensorBoard for the op names)
writer = tf.summary.FileWriter("Path/to/logs/folder")
writer.add_graph(sess.graph)
writer.flush()
writer.close()

tensor_output = sess.graph.get_tensor_by_name('import/dense/Softmax:0')
tensor_input = sess.graph.get_tensor_by_name('import/mobilenetv2_1.00_224_input:0')

When I simply import the graph_def I can see the graph in TensorBoard and there is no error.
But when I run create_inference_graph as shown above, I get the following error:

2019-06-28 09:29:06.387102: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2942 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
2019-06-28 09:29:07.813229: E tensorflow/core/grappler/grappler_item_builder.cc:321] Invalid fetch node name skipping this input
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/grappler/tf_optimizer.py", line 43, in OptimizeGraph
    verbose, graph_id, status)
SystemError: <built-in function TF_OptimizeGraph> returned NULL without setting an error

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "create_inference_graph_v3_inference.py", line 75, in <module>
    inference()
  File "create_inference_graph_v3_inference.py", line 35, in inference
    trt_graph = trt.create_inference_graph(
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/tensorrt/python/trt_convert.py", line 364, in create_inference_graph
    session_config_with_trt, grappler_meta_graph_def, graph_id=b"tf_graph")
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/grappler/tf_optimizer.py", line 43, in OptimizeGraph
    verbose, graph_id, status)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Failed to import metagraph, check error log for more info.

I am not sure why the graph extraction goes wrong, because if I simply do:

g_in = tf.import_graph_def(graph_def)

it works and I don’t see any problem.

I am using:
Jetson TX2 for inference flashed with JetPack 4.2.
Tensorflow 1.13.1
TensorRT 5.0

Looking forward to your suggestions.
Appreciate the help! Thank you in advance!

Regards,
T

So… answering my own post here:

This is how it worked for me:
Initially I was generating the frozen graph as follows:

frozen_graph = freeze_session(K.get_session(), output_names=[out.op.name for out in net_model.outputs])

where the freeze_session function is defined as follows:

from tensorflow.python.framework.graph_util import convert_variables_to_constants

def freeze_session(session, keep_var_names=None, output_names=None):
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        input_graph_def = graph.as_graph_def()
        frozen_graph = convert_variables_to_constants(session, input_graph_def,
                                                      output_names, freeze_var_names)
        return frozen_graph
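
(For reference, the resulting frozen GraphDef can then be written to disk with graph_io.write_graph so that it can be reloaded with gfile.FastGFile at inference time; the paths below are just placeholders.)

from tensorflow.python.framework import graph_io

# Write the frozen GraphDef to a .pb file (placeholder paths).
graph_io.write_graph(frozen_graph, 'path/to/output/dir', 'frozen_model.pb', as_text=False)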

The above did not generate the required metagraph data, so I changed the code to the following:

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model.signature_def_utils import predict_signature_def

tf.keras.backend.set_learning_phase(0)
net_model = load_model(model_file_path)
export_path = 'path/to/the/export/directory'
builder = saved_model_builder.SavedModelBuilder(export_path)
signature = predict_signature_def(inputs={'images': net_model.input},
                                  outputs={'scores': net_model.output})

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tag_constants.SERVING],
                                         signature_def_map={'predict': signature})
    builder.save()

The builder.save() call creates a "variables" directory with the index and data files, and writes the graph of the model as "saved_model.pb".
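
If it helps, here is a small sketch of how to load the exported SavedModel back and check the tensor names recorded under the 'predict' signature (same export_path and signature keys as above):

import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

export_path = 'path/to/the/export/directory'  # directory passed to SavedModelBuilder

with tf.Session(graph=tf.Graph()) as sess:
    # Loads saved_model.pb plus the variables/ directory written by builder.save().
    meta_graph = tf.saved_model.loader.load(sess, [tag_constants.SERVING], export_path)
    signature = meta_graph.signature_def['predict']
    print(signature.inputs['images'].name)    # input tensor name
    print(signature.outputs['scores'].name)   # output tensor name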

Hope this helps someone with similar issues!

Cheers,
T

It did help, thanks.

@Tejaswini,
I tried this

export_path = '/content/tfmodel/'
builder = tf.compat.v1.saved_model.builder.SavedModelBuilder(export_path)
signature = tf.compat.v1.saved_model.predict_signature_def(inputs={'images': self.keras_model.input},
                                                           outputs={'annots': self.keras_model.output})

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                         tags=[tf.saved_model.tag_constants.SERVING],
                                         signature_def_map={'predict': signature})
    builder.save()

but got this error:

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:202: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.
AttributeError: 'list' object has no attribute 'dtype'

Is there anything to update in this code?

Using TensorFlow 1.14 (Google Colab).

Got the same issue with the "AttributeError: 'list' object has no attribute 'dtype'" error.
How many inputs and outputs does your model have?

To figure out the reason, go to the file /usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/signature_def_utils_impl.py and modify the function build_tensor_info_internal(tensor) as follows: uncomment the commented lines and you will see the tensor type:

def build_tensor_info_internal(tensor):
  """Utility function to build TensorInfo proto from a Tensor."""
  if (isinstance(tensor, composite_tensor.CompositeTensor) and
      not isinstance(tensor, sparse_tensor.SparseTensor)):
    return _build_composite_tensor_info_internal(tensor)
  # print("!!!")
  # print("Tensor type [", type(tensor), "]")
  # print("Tensor content", tensor)
  # if "list" in str(type(tensor)):
  #   print("This is list")
  #   print("Tensor.dtype", tensor[0].dtype)
  # print("!!!")
  tensor_info = meta_graph_pb2.TensorInfo(
      dtype=dtypes.as_dtype(tensor.dtype).as_datatype_enum,
      tensor_shape=tensor.get_shape().as_proto())
  if isinstance(tensor, sparse_tensor.SparseTensor):
    tensor_info.coo_sparse.values_tensor_name = tensor.values.name
    tensor_info.coo_sparse.indices_tensor_name = tensor.indices.name
    tensor_info.coo_sparse.dense_shape_tensor_name = tensor.dense_shape.name
  else:
    tensor_info.name = tensor.name
  return tensor_info

The problem is that build_tensor_info_internal(tensor) expects a Tensor, but it received a list as input. This happens because either your model.input or your model.output is a list.

In your case,

predict_signature_def(inputs={'images': self.keras_model.input}, outputs={'annots': self.keras_model.output})

you should rework the signature dictionary as follows:

inputs={'input_1': self.keras_model.input[0], 'input_2': self.keras_model.input[1]}

Similarly if you have multiple outputs.
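
For example, a minimal sketch of the reworked call for a hypothetical model with two inputs and two outputs (the dictionary keys are just illustrative):

signature = tf.compat.v1.saved_model.predict_signature_def(
    inputs={'input_1': self.keras_model.input[0],    # first input tensor
            'input_2': self.keras_model.input[1]},   # second input tensor
    outputs={'output_1': self.keras_model.output[0],
             'output_2': self.keras_model.output[1]})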

Hope this works for you.

Thanks! This is a useful comment.
What do I do next with the saved_model.pb file?

I need to feed a frozen_graph into the create_inference_graph function:

trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,  # frozen model
    outputs=["dense_1"],
    max_batch_size=2,              # specify your max batch size
    precision_mode="FP32")

So how can I get the frozen_graph from saved_model.pb in my target directory?
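
One approach I am considering (just a sketch, not verified; it assumes the model was exported with the SERVING tag, and 'dense_1/Softmax' is only a guess at the output node name) is to load the SavedModel back into a session and freeze it with convert_variables_to_constants:

import tensorflow as tf
from tensorflow.python.framework import graph_util
from tensorflow.python.saved_model import tag_constants

export_path = '/content/tfmodel/'  # directory containing saved_model.pb and variables/

with tf.Session(graph=tf.Graph()) as sess:
    # Restore the graph and variables written by builder.save().
    tf.saved_model.loader.load(sess, [tag_constants.SERVING], export_path)
    # 'dense_1/Softmax' is a guess at the output node name; check the
    # signature_def or TensorBoard for the real one.
    frozen_graph = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ['dense_1/Softmax'])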