TF-TRT graph conversion fails with TensorFlow 1


I’m trying to build a TensorRT-optimized model via the TF-TRT path. The conversion throws the error below.

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/", line 501, in _import_graph_def_internal
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 1 of node StatefulPartitionedCall was passed float from Conv1/kernel:0 incompatible with expected resource.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "", line 17, in <module>
    'mobilenetv2'+'_'+PRECISION)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/compiler/tensorrt/", line 713, in save
    importer.import_graph_def(self._converted_graph_def, name="")
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/", line 513, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/", line 405, in import_graph_def
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/", line 505, in _import_graph_def_internal
    raise ValueError(str(e))
ValueError: Input 1 of node StatefulPartitionedCall was passed float from Conv1/kernel:0 incompatible with expected resource.


TensorRT Version :
GPU Type : Jetson Nano
Nvidia Driver Version : CUDA driver 10.2
CUDA Version : cuda-toolkit-10-2 (= 10.2.460-1)
cuDNN Version : 8.2
Operating System + Version : Ubuntu 18.04 (L4T with JetPack)
Python Version (if applicable) : 3.6.9
TensorFlow Version (if applicable) : 1.15.5
PyTorch Version (if applicable) :
Baremetal or Container (if container which image + tag) :

Relevant Files

-----model saving code----

from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf

file = tf.keras.utils.get_file(
    'grace_hopper.jpg',  # assumed: the URL was truncated in the original post; this is the standard TF tutorial asset
    'https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg')
img = tf.keras.preprocessing.image.load_img(file, target_size=[224, 224])
x = tf.keras.preprocessing.image.img_to_array(img)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x[tf.newaxis, ...])

labels_path = tf.keras.utils.get_file(
    'ImageNetLabels.txt',  # assumed: the URL was truncated in the original post; this is the standard TF tutorial asset
    'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
print('Imagenet labels: ',imagenet_labels)

pretrained_model = tf.keras.applications.MobileNetV2()
result_before_save = pretrained_model(x)
print('Result before save: ',result_before_save)

#decoded = imagenet_labels[np.argsort(result_before_save)[0,::-1][:5]+1]

#print("Result before saving:\n", decoded)

mobilenet_saved_path = 'tensorrt/tf_trt'
tf.saved_model.save(pretrained_model, mobilenet_saved_path)
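The "incompatible with expected resource" error typically indicates a SavedModel-format mismatch: `tf.saved_model.save` in TF 1.15 exports a TF2-style SavedModel whose weights are resource variables, which the TF1 `TrtGraphConverter` cannot re-import. A minimal sketch of a TF1-compatible export instead, assuming TF 1.15 and the same MobileNetV2 model (the export directory name is taken from the snippet above):

```python
import tensorflow as tf

# Sketch, assuming TF 1.15: export the Keras model through its underlying
# session so weights are stored as plain graph variables rather than TF2
# resource variables (which the TF1 TrtGraphConverter cannot import).
tf.compat.v1.keras.backend.set_learning_phase(0)  # inference mode

model = tf.keras.applications.MobileNetV2()
sess = tf.compat.v1.keras.backend.get_session()

tf.compat.v1.saved_model.simple_save(
    sess,
    'tensorrt/tf_trt',                  # same export dir as above
    inputs={'input': model.input},
    outputs={'output': model.output})
```

This keeps the exported graph in TF1 (session/graph-variable) form, which is what the TF1 conversion path below expects.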


import tensorflow as tf
from tensorflow.keras.applications import MobileNetV2
import numpy as np
from tensorflow.python.compiler.tensorrt import trt_convert as tf_trt


#we generate a dummy batch of data to pass into the network just to get an understanding of its performance. 
#This is normally where you would supply a numpy batch of images.
BATCH_SIZE = 1  # assumed value; BATCH_SIZE is not defined in the original snippet
dummy_input_batch = np.zeros((BATCH_SIZE, 224, 224, 3))


PRECISION = 'FP16'  # assumed value; PRECISION is not defined in the original snippet
tf_trt_converter = tf_trt.TrtGraphConverter(
    input_saved_model_dir='/home/aitrios/tensorrt/tf_trt',
    precision_mode=PRECISION)
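The snippet stops after constructing the converter, but the traceback shows a later `save(...)` call, so the remaining conversion steps were run. A hedged sketch of those steps with the TF1 `TrtGraphConverter` API, assuming TF 1.15, the SavedModel path used above, and an example `PRECISION` value:

```python
from tensorflow.python.compiler.tensorrt import trt_convert as tf_trt

# Example value; PRECISION is not defined in the original snippet.
PRECISION = 'FP16'

converter = tf_trt.TrtGraphConverter(
    input_saved_model_dir='/home/aitrios/tensorrt/tf_trt',
    precision_mode=PRECISION)
frozen_graph = converter.convert()               # returns the converted GraphDef
converter.save('mobilenetv2' + '_' + PRECISION)  # writes the optimized SavedModel
```

Note that this TF1 converter expects the input SavedModel to use graph (non-resource) variables, so the export step must be TF1-compatible or the `save` call fails with the "expected resource" error shown in the traceback.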

Steps To Reproduce

Kindly refer to the code snippet attached.

We recommend checking the samples linked below for TF-TRT integration issues.

If the issue persists, we recommend reaching out to the TensorFlow forum.