Hello,
I am using Jetson TX2 with JetPack 3.3, Tensorflow 1.11.0, libnvinfer 4.1.3, Cuda 9.0.
I am trying to create_inference_graph using my saved keras model using the trt.create_inference_graph function according to this tutorial: https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#using-savedmodel
I import trt as:
import tensorflow.contrib.tensorrt as trt
since "from tensorflow.python.compiler.tensorrt import trt_convert" is not available in this TensorFlow version.
However, I get the following error: "TypeError: create_inference_graph() got an unexpected keyword argument 'input_saved_model_dir'"
When I check trt_convert.py, located at /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/tensorrt/python/, the function definition itself does not accept the "input_saved_model_dir" or "input_saved_model_tags" parameters. The definition from trt_convert.py is as follows:
def create_inference_graph(input_graph_def,
                           outputs,
                           max_batch_size=1,
                           max_workspace_size_bytes=2 << 20,
                           precision_mode="FP32",
                           minimum_segment_size=3,
                           is_dynamic_op=False,
                           maximum_cached_engines=1,
                           cached_engine_batches=None):
This definition is different from the one found here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/compiler/tensorrt/trt_convert.py
I am not sure what I am missing here. Is it that my TensorRT (or TF-TRT) version is too old to support these parameters?
Appreciate some help!