Converting TF 2 Object Detection Model to TensorRT

I am trying to convert the EfficientDet D1 640x640 model from the new TensorFlow 2 Object Detection API into a TensorRT model to run on my Jetson AGX board.

I am running the following code:

import tensorflow as tf
import numpy as np
from tensorflow.python.compiler.tensorrt import trt_convert as trt

input_saved_model_dir = './efficientdet_d1_coco17_tpu-32/saved_model/'
output_saved_model_dir = './models/tensorRT/'
num_runs = 2

conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS
conversion_params = conversion_params._replace(max_workspace_size_bytes=(1<<32))
conversion_params = conversion_params._replace(precision_mode="FP16")
# conversion_params = conversion_params._replace(maximum_cached_engines=100)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=input_saved_model_dir,
    conversion_params=conversion_params)

def my_input_fn():
    for _ in range(num_runs):
        inp1 = np.random.normal(size=(1, 640, 640, 3)).astype(np.uint8)
        yield inp1

converter.convert()
converter.build(input_fn=my_input_fn)

I am getting the following error:

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-5-d7c3941a6051> in <module>
      7         yield inp1
----> 9

/projects/sebschaefer/venv/tf22gpu_copy/lib/python3.6/site-packages/tensorflow/python/compiler/tensorrt/ in build(self, input_fn)
   1172       if not first_input:
   1173         first_input = inp
-> 1174       func(*map(ops.convert_to_tensor, inp))
   1176     if self._need_trt_profiles:

/projects/sebschaefer/venv/tf22gpu_copy/lib/python3.6/site-packages/tensorflow/python/eager/ in __call__(self, *args, **kwargs)
   1603       TypeError: For invalid positional/keyword argument combinations.
   1604     """
-> 1605     return self._call_impl(args, kwargs)
   1607   def _call_impl(self, args, kwargs, cancellation_manager=None):

/projects/sebschaefer/venv/tf22gpu_copy/lib/python3.6/site-packages/tensorflow/python/eager/ in _call_impl(self, args, kwargs, cancellation_manager)
   1643       raise TypeError("Keyword arguments {} unknown. Expected {}.".format(
   1644           list(kwargs.keys()), list(self._arg_keywords)))
-> 1645     return self._call_flat(args, self.captured_inputs, cancellation_manager)
   1647   def _filtered_call(self, args, kwargs):

/projects/sebschaefer/venv/tf22gpu_copy/lib/python3.6/site-packages/tensorflow/python/eager/ in _call_flat(self, args, captured_inputs, cancellation_manager)
   1744       # No tape is watching; skip to running the function.
   1745       return self._build_call_outputs(
-> 1746           ctx, args, cancellation_manager=cancellation_manager))
   1747     forward_backward = self._select_forward_and_backward_functions(
   1748         args,

/projects/sebschaefer/venv/tf22gpu_copy/lib/python3.6/site-packages/tensorflow/python/eager/ in call(self, ctx, args, cancellation_manager)
    596               inputs=args,
    597               attrs=attrs,
--> 598               ctx=ctx)
    599         else:
    600           outputs = execute.execute_with_cancellation(

/projects/sebschaefer/venv/tf22gpu_copy/lib/python3.6/site-packages/tensorflow/python/eager/ in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     58     ctx.ensure_initialized()
     59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60                                         inputs, attrs, num_outputs)
     61   except core._NotOkStatusException as e:
     62     if name is not None:

InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument:  Input shapes do not match input partial shapes stored in graph, for TRTEngineOp_5: [[640,640,3]] != [[1,?,?,3]]
	 [[node TRTEngineOp_5 (defined at <ipython-input-5-d7c3941a6051>:2) ]]
  (1) Invalid argument:  Input shapes do not match input partial shapes stored in graph, for TRTEngineOp_5: [[640,640,3]] != [[1,?,?,3]]
	 [[node TRTEngineOp_5 (defined at <ipython-input-5-d7c3941a6051>:2) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_pruned_177388]

Function call stack:
pruned -> pruned

The error message says that it expects an input shape of [1,?,?,3] but gets [640,640,3]. However, I am passing an array of size (1, 640, 640, 3), which should be correct. For some reason it doesn't seem to work.
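The shapes in the error line up with how the build step consumes the input function. The traceback shows `func(*map(ops.convert_to_tensor, inp))`, where `inp` is whatever `my_input_fn` yielded; star-unpacking a bare 4-D array iterates over its first (batch) axis. A minimal NumPy sketch of that unpacking (pure NumPy, no TensorFlow needed):

```python
import numpy as np

inp1 = np.random.normal(size=(1, 640, 640, 3)).astype(np.uint8)

# Iterating a bare 4-D array (as * unpacking does) walks the first
# (batch) axis, so the model sees one argument of shape (640, 640, 3)
# -- exactly the shape reported in the error.
shapes_bare = [np.asarray(x).shape for x in inp1]
print(shapes_bare)      # [(640, 640, 3)]

# Yielding a one-element tuple instead keeps the batch dimension intact:
shapes_tuple = [np.asarray(x).shape for x in (inp1,)]
print(shapes_tuple)     # [(1, 640, 640, 3)]
```
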

The model I am using is the pretrained model from the TensorFlow 2 Object Detection API (

Thanks in advance for your help!


Sorry for the late update.

The input tensor has to match the model's input signature, like this:

inputs = keras.Input(shape=(640, 640, 3,))

def my_input_fn():
    for _ in range(100):
        inp1 = np.random.normal(size=(1, 640, 640, 3)).astype(np.float32)
        yield inp1,
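
The trailing comma after `inp1` is the crucial detail: it makes the generator yield a one-element tuple, which the build step can unpack into a single (1, 640, 640, 3) argument, rather than a bare array whose batch axis gets iterated away. A minimal sketch of the difference:

```python
import numpy as np

def input_fn_bare():
    # Without the trailing comma: yields the bare array.
    yield np.zeros((1, 640, 640, 3), dtype=np.float32)

def input_fn_tuple():
    # With the trailing comma: yields a one-element tuple.
    yield np.zeros((1, 640, 640, 3), dtype=np.float32),

bare = next(input_fn_bare())
wrapped = next(input_fn_tuple())

print(type(bare).__name__)     # ndarray
print(type(wrapped).__name__)  # tuple
print(wrapped[0].shape)        # (1, 640, 640, 3)
```
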

You can find a sample in the documentation here: