Error during conversion of a TensorFlow model to TensorRT

Hi,

I am converting a TensorFlow model to TensorRT.

TensorFlow version = 2.3.1
TensorRT Version = 7.1.3.0

The code I am using for the conversion is below.

import numpy as np
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt
from tensorflow.python.framework import convert_to_constants
from tensorflow.python.saved_model import signature_constants, tag_constants

input_saved_model_dir = "./graph-200000/saved_model/"
output_saved_model_dir = "./output_trt_model/"

conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS
conversion_params = conversion_params._replace(
    max_workspace_size_bytes=(1 << 32))
conversion_params = conversion_params._replace(precision_mode="FP16")
conversion_params = conversion_params._replace(maximum_cached_engines=100)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=input_saved_model_dir,
    conversion_params=conversion_params)
converter.convert()
converter.save(output_saved_model_dir)

def my_input_fn():
    # Input for a single inference call, for a network that has two input tensors:
    inp1 = np.random.normal(size=(8, 16, 16, 3)).astype(np.float32)
    inp2 = np.random.normal(size=(8, 16, 16, 3)).astype(np.float32)
    yield (inp1, inp2)
    #yield (inp1)

converter.build(input_fn=my_input_fn)
converter.save(output_saved_model_dir)

saved_model_loaded = tf.saved_model.load(
    output_saved_model_dir, tags=[tag_constants.SERVING])
graph_func = saved_model_loaded.signatures[
    signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
frozen_func = convert_to_constants.convert_variables_to_constants_v2(
    graph_func)
output = frozen_func(input_data)[0].numpy()  # input_data is not defined in this script

The error is:

Traceback (most recent call last):
  File "tensorflow_to_trt_2.py", line 25, in <module>
    converter.build(input_fn)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 1174, in build
    func(*map(ops.convert_to_tensor, inp))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 1655, in __call__
    return self._call_impl(args, kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/wrap_function.py", line 247, in _call_impl
    args, kwargs, cancellation_manager)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 1673, in _call_impl
    return self._call_with_flat_signature(args, kwargs, cancellation_manager)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 1695, in _call_with_flat_signature
    len(args)))
TypeError: pruned(image_tensor) takes 1 positional arguments but 2 were given

Please help me figure out where I am going wrong.

Thanks.

Hi, please share the ONNX model and the script so that we can assist you better.

In the meantime, you can try validating your model with the snippet below:

check_model.py

import onnx

# Replace with the path to your ONNX model file.
filename = "your_model.onnx"
model = onnx.load(filename)
onnx.checker.check_model(model)

Alternatively, you can try running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
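
For example, a typical trtexec run for an ONNX model looks like this (a minimal sketch; the file names are placeholders, and --fp16 / --workspace mirror the precision mode and 4 GB workspace used in the Python script above):

trtexec --onnx=your_model.onnx --saveEngine=your_model.engine --fp16 --workspace=4096

If trtexec can build and run the engine, the model itself is convertible and the problem lies in the conversion script.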

Thanks!

Hi @NVES,
Thanks for the reply.

I have two files in the input folder, i.e. save_models.pd and save_models.prototxt, and the script I am using is below.

import numpy as np
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt
from tensorflow.python.framework import convert_to_constants
from tensorflow.python.saved_model import signature_constants, tag_constants

input_saved_model_dir = "./graph-200000/saved_model/"
output_saved_model_dir = "./output_trt_model/"

conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS
conversion_params = conversion_params._replace(
    max_workspace_size_bytes=(1 << 32))
conversion_params = conversion_params._replace(precision_mode="FP16")
conversion_params = conversion_params._replace(maximum_cached_engines=100)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=input_saved_model_dir,
    conversion_params=conversion_params)
converter.convert()
converter.save(output_saved_model_dir)

def my_input_fn():
    # Input for a single inference call, for a network that has two input tensors:
    inp1 = np.random.normal(size=(8, 16, 16, 3)).astype(np.float32)
    inp2 = np.random.normal(size=(8, 16, 16, 3)).astype(np.float32)
    yield (inp1, inp2)
    #yield (inp1)

converter.build(input_fn=my_input_fn)
converter.save(output_saved_model_dir)

saved_model_loaded = tf.saved_model.load(
    output_saved_model_dir, tags=[tag_constants.SERVING])
graph_func = saved_model_loaded.signatures[
    signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
frozen_func = convert_to_constants.convert_variables_to_constants_v2(
    graph_func)
output = frozen_func(input_data)[0].numpy()  # input_data is not defined in this script

Thanks…

Hi @Pritam,

The error message says that the model takes only one argument, but the build function provided two. During the build step we run inference with the converted model, so we need input data that the model accepts. For example, if your original (unconverted) TF model expects a single input tensor, tensor_a, then you need to provide a tensor (or NumPy array) with the same shape.

def my_input_fn():
    yield (tensor_a,)

  • Do not forget the comma! We need to provide a tuple of input tensors even if there is only a single input.
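
In your case, the traceback (pruned(image_tensor) takes 1 positional arguments but 2 were given) indicates that the SavedModel has a single input named image_tensor, so my_input_fn should yield a one-element tuple. A minimal sketch, reusing the shape from your script (substitute whatever shape image_tensor actually expects):

import numpy as np

def my_input_fn():
    # Single-input network: yield a one-element tuple.
    # NOTE: (8, 16, 16, 3) is copied from the original script; replace it
    # with the shape that the image_tensor input actually expects.
    inp1 = np.random.normal(size=(8, 16, 16, 3)).astype(np.float32)
    yield (inp1,)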

The actual values do not matter here; what matters is that tensor_a.shape is compatible with what the network expects. In your code, my_input_fn yields two tensors, which means the network is called with two inputs, but the error message tells you that the network accepts only one.
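
If you are unsure how many inputs the model expects, you can inspect the SavedModel signature before writing my_input_fn. A minimal sketch, assuming the default serving_default signature key:

import tensorflow as tf

# Load the SavedModel and print its serving signature to see the expected
# input names, shapes, and dtypes.
loaded = tf.saved_model.load("./graph-200000/saved_model/")
sig = loaded.signatures["serving_default"]
print(sig.structured_input_signature)  # expected inputs, as TensorSpecs
print(sig.structured_outputs)          # outputs, as TensorSpecs

The command-line tool saved_model_cli show --dir ./graph-200000/saved_model/ --all prints the same information.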

Thank you.