TF-TRT ERROR:tensorflow.python.framework.errors_impl.InvalidArgumentError: Failed to import metagraph, check error log for more info.

I use TF-TRT to convert my TensorFlow SavedModel, but I am blocked by this error:

2019-11-02 08:52:55.449262: E tensorflow/core/grappler/grappler_item_builder.cc:330] Failed to detect the fetch node(s), skipping this input
Traceback (most recent call last):
  File "./trt6_convert_save.py", line 12, in <module>
    converter.convert()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 300, in convert
    self._convert_saved_model()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 287, in _convert_saved_model
    self._run_conversion()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 204, in _run_conversion
    graph_id=b"tf_graph")
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/grappler/tf_optimizer.py", line 41, in OptimizeGraph
    verbose, graph_id)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Failed to import metagraph, check error log for more info.

My image is nvidia/tensorflow:19.10-py3. The following is my code:

import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

input_saved_model_dir = "/tf_savedmodel"
output_saved_model_dir = "/tftrt_savedmodel"

# Convert the SavedModel with TF-TRT and save the result.
converter = trt.TrtGraphConverter(input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)

There seems to be something wrong with the input graph.

Perhaps the input graph doesn’t have any output tensor?
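One way to check is to inspect the SavedModel's signature defs before converting. A minimal sketch (the toy model below is an assumption standing in for the real one; in practice, point `export_dir` at your own SavedModel directory and skip the export step):

```python
import tempfile
import tensorflow as tf
from tensorflow.python.tools import saved_model_utils

tf.compat.v1.disable_eager_execution()

# Build and export a toy SavedModel so the inspection step is runnable;
# with a real model, set export_dir to its directory instead.
export_dir = tempfile.mkdtemp() + "/toy_savedmodel"
graph = tf.Graph()
with graph.as_default(), tf.compat.v1.Session(graph=graph) as sess:
    x = tf.compat.v1.placeholder(tf.float32, [None, 4], name="x")
    y = tf.identity(x * 2.0, name="y")
    tf.compat.v1.saved_model.simple_save(sess, export_dir,
                                         inputs={"x": x}, outputs={"y": y})

# Inspect the signatures: if a signature has no outputs, grappler cannot
# detect the fetch nodes and fails with "Failed to import metagraph".
meta_graph = saved_model_utils.get_meta_graph_def(export_dir, "serve")
for sig_name, sig in meta_graph.signature_def.items():
    print(sig_name, "->", list(sig.outputs))
```

If the printed signature has an empty output list, the SavedModel needs to be re-exported with its outputs declared.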

How did you make the input saved_model?

Could you post the model here?

Hi Pooya-Davoodi, I have posted my model as an attachment.

My model was trained with TensorFlow 1.12.

Have you resolved this issue? I am having the same problem.


I’m using TF 1.15 and I am trying to convert a frozen graph (.pb) from TensorFlow to TensorRT, so I used this code:

"""
        TensorFlow to TensorRT converter with TensorFlow 1.15
        Workflow with a fozen graph

"""

            import tensorflow as tf
            from tensorflow.python.compiler.tensorrt import trt_convert as trt

            with tf.compat.v1.Session() as sess:
                # First deserialize your frozen graph:
                with tf.io.gfile.GFile("tensorflow-yolo-v3/frozen_darknet_yolov3_model.pb", 'rb') as f:
                    frozen_graph = tf.compat.v1.GraphDef()
                    frozen_graph.ParseFromString(f.read())
                    # Now you can create a TensorRT inference graph from your
                    # frozen graph:
                converter = trt.TrtGraphConverter(
            	    input_graph_def=frozen_graph,
            	    nodes_blacklist=['output_boxes']) #output nodes
                trt_graph = converter.convert()
                # Import the TensorRT graph into a new graph and run:
                output_node = tf.import_graph_def(
                    trt_graph,
                    return_elements=['output_boxes'])
                sess.run(output_node)

But after executing this code, I am getting this output:

2021-04-22 11:03:18.950576: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2021-04-22 11:03:19.062426: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-22 11:03:19.063453: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3c675d0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-04-22 11:03:19.063533: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Xavier, Compute Capability 7.2
2021-04-22 11:03:19.064088: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-22 11:03:19.064251: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1665] Found device 0 with properties:
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.377
pciBusID: 0000:00:00.0
2021-04-22 11:03:19.064339: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
2021-04-22 11:03:19.064410: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.10
2021-04-22 11:03:19.064464: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-04-22 11:03:19.064524: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-04-22 11:03:19.064570: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-04-22 11:03:19.064615: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.10
2021-04-22 11:03:19.064687: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-04-22 11:03:19.064831: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-22 11:03:19.064982: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-22 11:03:19.065059: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1793] Adding visible gpu devices: 0
2021-04-22 11:03:19.065191: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
2021-04-22 11:03:23.813585: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1206] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-04-22 11:03:23.813689: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] 0
2021-04-22 11:03:23.813714: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1225] 0: N
2021-04-22 11:03:23.814137: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-22 11:03:23.814397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-22 11:03:23.814639: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 27313 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
2021-04-22 11:03:25.384551: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libnvinfer.so.7
2021-04-22 11:03:28.272454: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-22 11:03:28.272693: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2021-04-22 11:03:28.272950: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2021-04-22 11:03:28.273960: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-22 11:03:28.274086: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1665] Found device 0 with properties:
name: Xavier major: 7 minor: 2 memoryClockRate(GHz): 1.377
pciBusID: 0000:00:00.0
2021-04-22 11:03:28.274152: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
2021-04-22 11:03:28.274233: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.10
2021-04-22 11:03:28.274301: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2021-04-22 11:03:28.274367: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2021-04-22 11:03:28.274416: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10
2021-04-22 11:03:28.274459: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.10
2021-04-22 11:03:28.274497: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2021-04-22 11:03:28.274626: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-22 11:03:28.274810: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-22 11:03:28.274946: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1793] Adding visible gpu devices: 0
2021-04-22 11:03:28.275014: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1206] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-04-22 11:03:28.275039: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1212] 0
2021-04-22 11:03:28.275062: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1225] 0: N
2021-04-22 11:03:28.275212: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-22 11:03:28.275409: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1049] ARM64 does not support NUMA - returning NUMA node zero
2021-04-22 11:03:28.275503: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1351] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 27313 MB memory) -> physical GPU (device: 0, name: Xavier, pci bus id: 0000:00:00.0, compute capability: 7.2)
2021-04-22 11:03:31.784527: I tensorflow/compiler/tf2tensorrt/segment/segment.cc:486] There are 10 ops of 5 different types in the graph that are not converted to TensorRT: ResizeNearestNeighbor, ConcatV2, SplitV, NoOp, Placeholder, (For more information see https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#supported-ops).
2021-04-22 11:03:32.017294: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:647] Number of TensorRT candidate segments: 6
2021-04-22 11:03:32.467664: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libnvinfer.so.7
2021-04-22 11:03:32.706683: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libnvinfer_plugin.so.7

Killed

Does anyone have an idea how to avoid the “Killed”?
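A “Killed” message with no Python traceback usually means the Linux OOM killer terminated the process, here most likely while TensorRT was building its engines. A sketch of converter settings that reduce peak memory during conversion; this reuses `frozen_graph` and the output node name from the script above, and the specific values are assumptions to tune, not known-good numbers for this model:

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Sketch only: same conversion as above, but tuned to lower peak memory.
converter = trt.TrtGraphConverter(
    input_graph_def=frozen_graph,        # GraphDef loaded in the script above
    nodes_blacklist=['output_boxes'],
    max_batch_size=1,
    max_workspace_size_bytes=1 << 28,    # 256 MB TRT workspace (assumed value)
    precision_mode='FP16',               # smaller engines than FP32
    maximum_cached_engines=1,
    is_dynamic_op=True)                  # build engines lazily at run time
trt_graph = converter.convert()
```

On Jetson boards it can also help to close other GPU/RAM consumers and to add a swap file before converting, since CPU and GPU share the same physical memory there.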