Keras .pb model to TensorRT engine conversion

I have a Keras .pb model that I trained with TensorFlow 1.15.0 and Keras 2.3.1. Could you please guide me on converting this model to a TensorRT engine?

Hi,

You can convert via an ONNX model (pb → onnx → trt),

or

you can convert pb → trt directly using TF-TRT.
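For reference, the ONNX route can be sketched with the tf2onnx CLI and TensorRT's trtexec. This is only a sketch: the input/output tensor names (`input_1:0`, `predictions/Softmax:0`) are placeholders that must be replaced with your model's actual node names, and it assumes tf2onnx and TensorRT are installed.

```shell
# pb -> onnx: tf2onnx can consume a frozen GraphDef directly
python -m tf2onnx.convert \
    --graphdef my_model.pb \
    --inputs input_1:0 \
    --outputs predictions/Softmax:0 \
    --output my_model.onnx

# onnx -> trt: trtexec builds and serializes a TensorRT engine
trtexec --onnx=my_model.onnx --saveEngine=my_model.engine
```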

Thank you.

Hi @spolisetty ,

Thanks for the reply. I was trying to convert with TF-TRT. I ran this code:

from tensorflow.python.compiler.tensorrt import trt_convert as trt

input_saved_model_dir = "my_model.pb" 
output_saved_model_dir = "my_model.engine"

converter = trt.TrtGraphConverter(input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)

And I am getting this error:

    converter.convert()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/compiler/tensorrt/trt_convert.py", line 548, in convert
    self._convert_saved_model()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/compiler/tensorrt/trt_convert.py", line 494, in _convert_saved_model
    self._input_saved_model_dir)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/loader_impl.py", line 268, in load
    loader = SavedModelLoader(export_dir)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/loader_impl.py", line 284, in __init__
    self._saved_model = parse_saved_model(export_dir)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/loader_impl.py", line 83, in parse_saved_model
    constants.SAVED_MODEL_FILENAME_PB))
OSError: SavedModel file does not exist at: my_model.pb/{saved_model.pbtxt|saved_model.pb}

The model name is correct and the file exists at that path.
Any idea why I am getting this error?

Hi,

Hope the following similar issue may help you.

Thank you.

Hi @spolisetty ,

Thanks, I have checked it and tried the suggestions there. I am converting inside the NVIDIA TF 1.15 container, so dependencies such as h5py are already installed. One question: does the input_saved_model_dir variable expect a directory, or the full path of the .pb file? In my case I only have the .pb model file, with no other associated files, so I passed the absolute path of the .pb file as input_saved_model_dir.
It is still throwing the error:
OSError: SavedModel file does not exist at: my_model.pb/{saved_model.pbtxt|saved_model.pb}

@jazeel.jk, did you solve the problem? Can you help me, please?

No, it's not resolved yet, @sametyassine88.

Have you tried both ways?

Yes, I tried both. As I said, I have only a .pb file, so I tried giving the absolute path, and I also placed it in a folder and gave the folder path. Neither worked.

Hi @jazeel.jk,

We recommend you post your concern on the TensorFlow forum to get better help.

Thank you.

@jazeel.jk, when you solve the problem, please tell me,
and if I solve it, I will tell you.

Sure, @sametyassine88. I have created a topic on the TensorFlow forum as @spolisetty suggested, but there is no reply yet.

I think the method I tried,

from tensorflow.python.compiler.tensorrt import trt_convert as trt
input_saved_model_dir = "my_model.pb" 
output_saved_model_dir = "my_model.engine"
converter = trt.TrtGraphConverter(input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)

expects a directory containing an assets folder, a variables folder, and a saved_model.pb file. In my case the .pb file is a frozen model, which means we should use the second method mentioned in the documentation to convert the frozen graph to a TensorRT engine.

So I tried to read the frozen graph from the frozen model:

from tensorflow.python.compiler.tensorrt import trt_convert as trt
import tensorflow as tf

input_saved_model_dir = "my_model.pb"
output_saved_model_dir = "my_model.engine"

# Read the frozen GraphDef directly from the .pb file
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
with tf.gfile.GFile(input_saved_model_dir, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

converter = trt.TrtGraphConverter(input_graph_def=graph_def)
frozen_graph = converter.convert()

and I am getting this error:

Traceback (most recent call last):
  File "tf_trt_convert.py", line 18, in <module>
    frozen_graph = converter.convert()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/compiler/tensorrt/trt_convert.py", line 546, in convert
    self._convert_graph_def()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/compiler/tensorrt/trt_convert.py", line 475, in _convert_graph_def
    self._run_conversion()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/compiler/tensorrt/trt_convert.py", line 453, in _run_conversion
    graph_id=b"tf_graph")
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/grappler/tf_optimizer.py", line 41, in OptimizeGraph
    verbose, graph_id)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Failed to import metagraph, check error log for more info.

That is where I am now, @sametyassine88.

@sametyassine88, @jazeel.jk

Meanwhile, in case you find this helpful.

Thanks @spolisetty. Yes, I used the frozen-graph loading method from that article:

with tf.gfile.GFile(model_filepath, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

Then I tried to convert that frozen graph with:

converter = trt.TrtGraphConverter(input_graph_def=graph_def)
frozen_graph = converter.convert()

and I was getting the error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Failed to import metagraph, check error log for more info.

Thanks for the confirmation. Please follow up on the TensorFlow forum for further assistance.