Getting "failed to import metagraph" when running the TRT saved_model converter workflow for TF 1.15

I’m getting the following error when trying to quantize a saved_model.pb to INT8 with the TRT graph converter for TF 1.15:

python3 quantize.py
2020-10-17 14:35:50.698950: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-10-17 14:35:50.709849: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ff13ff9bcc0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-17 14:35:50.709866: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/compiler/tensorrt/trt_convert.py:494: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/compiler/tensorrt/trt_convert.py:517: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.convert_variables_to_constants
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/framework/graph_util_impl.py:277: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
2020-10-17 14:35:52.822243: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2020-10-17 14:35:52.822302: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-10-17 14:35:52.823566: E tensorflow/core/grappler/grappler_item_builder.cc:423] Failed to detect the fetch node(s), skipping this input
Traceback (most recent call last):
File "quantize.py", line 13, in <module>
converter.convert()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/compiler/tensorrt/trt_convert.py", line 548, in convert
self._convert_saved_model()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/compiler/tensorrt/trt_convert.py", line 536, in _convert_saved_model
self._run_conversion()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/compiler/tensorrt/trt_convert.py", line 453, in _run_conversion
graph_id=b"tf_graph")
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/grappler/tf_optimizer.py", line 41, in OptimizeGraph
verbose, graph_id)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Failed to import metagraph, check error log for more info.

I saw an earlier post about the same issue, but it never received an actual answer. I'm wondering whether the problem is with how I'm doing the conversion, or with the saved_model.pb file itself. Also, where can I find the "error log" that the output mentions? I'm hoping it holds some useful clues.
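One possibly relevant line in the log is "Failed to detect the fetch node(s), skipping this input", which I understand to mean Grappler couldn't find any output tensors, i.e. the SavedModel's serving signature may be missing or stored under a non-default key. Here's a minimal sketch of how I believe the signatures can be inspected in TF 1.15 (saved_model_cli show --dir <dir> --all should show the same information):

import tensorflow as tf

with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel and print its signature defs; each entry lists
    # the output tensor names Grappler would need as fetch nodes.
    meta_graph = tf.compat.v1.saved_model.loader.load(
        sess, [tf.saved_model.SERVING], input_saved_model_dir)
    for key, sig in meta_graph.signature_def.items():
        print(key, sig.outputs)

If the signature turns out to live under a non-default key, TrtGraphConverter also takes an input_saved_model_signature_key argument that can point it at the right one.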

There are a fair number of discrepancies between my own development environment and the one used to develop the saved_model file (such as the OS), so I'm guessing that could also have something to do with it. However, I'm making sure to use the same version of TensorFlow the model was developed with.
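As a sanity check on the version point, this is the kind of thing I'm running to confirm (nothing model-specific):

import tensorflow as tf
print(tf.__version__)                # should match the 1.15.x the model was exported with
print(tf.test.is_built_with_cuda())  # False here, per the "not compiled with CUDA support" log line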

I’m running the following code to quantize the model:

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# input_saved_model_dir / output_saved_model_dir are set elsewhere to the SavedModel paths
converter = trt.TrtGraphConverter(
    input_saved_model_dir=input_saved_model_dir,
    max_workspace_size_bytes=(1 << 32),  # 4 GiB workspace
    precision_mode='INT8',
    maximum_cached_engines=100)
converter.convert()
converter.save(output_saved_model_dir)
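One more thing I'm unsure about: if I'm reading the TF 1.15 trt_convert API correctly, INT8 mode with the default use_calibration=True also expects a calibration pass between convert() and save(). A rough sketch of what I believe that step looks like (my_fetch_names and my_feed_dict_fn are placeholders for my model's output tensor names and a function yielding representative input batches):

converter.convert()
converter.calibrate(
    fetch_names=my_fetch_names,    # e.g. ['logits:0'] -- placeholder, model-specific
    num_runs=10,                   # number of calibration batches to run
    feed_dict_fn=my_feed_dict_fn)  # returns a feed_dict of representative inputs
converter.save(output_saved_model_dir)

If that step is required here too, it might at least be a second thing to rule out.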

Any help is appreciated,
Thanks.

Environment

TensorRT Version: 7.2
Operating System + Version: macOS 10.15.6
Python Version (if applicable): 3.7.4
TensorFlow Version (if applicable): 1.15

Hi @cameronmeissner,
Could you please share the model and script files to reproduce the issue so we can help better?

Thanks