Hi AastaLLL,
Here’s the code I’ve been using to convert the model with TF-TRT, on TensorFlow 2.1.0 with TensorRT 6 and CUDA 10.2.
```python
import sys

import cv2
import numpy as np
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS
conversion_params = conversion_params._replace(max_workspace_size_bytes=(1 << 32))
conversion_params = conversion_params._replace(precision_mode="FP16")
conversion_params = conversion_params._replace(maximum_cached_engines=100)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=sys.argv[1],
    conversion_params=conversion_params,
)
converter.convert()

def input_fn():
    # The model takes JPEG-encoded image bytes, so feed it encoded random images.
    for _ in range(128):
        inp = np.random.normal(size=(512, 512, 3)).astype(np.float32)
        result, output = cv2.imencode(".jpg", inp)
        yield output.tobytes()

converter.build(input_fn=input_fn)
converter.save(sys.argv[2])
```
Unfortunately, on my development system it always fails with:
```
2020-02-10 08:00:20.388739: E tensorflow/core/grappler/grappler_item_builder.cc:656] Init node index_to_string/table_init/LookupTableImportV2 doesn't exist in graph
Traceback (most recent call last):
  File "run.py", line 15, in <module>
    converter.convert()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/compiler/tensorrt/trt_convert.py", line 980, in convert
    frozen_func = convert_to_constants.convert_variables_to_constants_v2(func)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/convert_to_constants.py", line 428, in convert_variables_to_constants_v2
    graph_def = _run_inline_graph_optimization(func, lower_control_flow)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/convert_to_constants.py", line 127, in _run_inline_graph_optimization
    return tf_optimizer.OptimizeGraph(config, meta_graph)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/grappler/tf_optimizer.py", line 59, in OptimizeGraph
    strip_default_attributes)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Failed to import metagraph, check error log for more info.
```
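For what it’s worth, the error points at a table-initializer node (`index_to_string/table_init/LookupTableImportV2`) that Grappler can’t find after inlining, so one thing I tried was checking whether the SavedModel’s graph actually carries the lookup-table ops. A minimal sketch of that check — `find_table_nodes` is just a hypothetical helper, and the TF loading lines are commented out because they depend on the model path:

```python
# To get real (name, op_type) pairs from a SavedModel you'd do something like:
#
#   import tensorflow as tf
#   gd = tf.saved_model.load(path).signatures["serving_default"].graph.as_graph_def()
#   nodes = [(n.name, n.op) for n in gd.node]

def find_table_nodes(nodes):
    """Filter (name, op_type) pairs down to lookup/hash-table ops."""
    return [(name, op) for name, op in nodes
            if "LookupTable" in op or "HashTable" in op]

# Example with a hand-written node list:
nodes = [
    ("index_to_string/table_init/LookupTableImportV2", "LookupTableImportV2"),
    ("conv1/Conv2D", "Conv2D"),
]
print(find_table_nodes(nodes))  # only the table-init node survives the filter
```

If the initializer op is present in the loaded graph but Grappler still can’t resolve it, that would suggest the problem is in the freezing/inlining step rather than in the SavedModel itself.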
On my Jetson the conversion does run, but the converted model is less precise and has significantly lower recall. Unfortunately, that means TF-TRT (at least with default settings) won’t work for my application.
I turned to DeepStream, and while it handles getting video input and sending the output, I don’t know how to write an nvinfer plugin to run my model’s inference.
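From what I can tell, the nvinfer element is normally driven by a plain config file rather than custom plugin code, unless the model needs a custom output parser. A rough sketch of what I understand such a config to look like — all paths and the class count are placeholders for my model, nothing here is verified end-to-end:

```ini
[property]
gpu-id=0
# TensorRT engine built from the model (placeholder path)
model-engine-file=model.engine
batch-size=1
# network-mode: 0 = FP32, 1 = INT8, 2 = FP16
network-mode=2
num-detected-classes=4
# If the model's outputs need custom decoding, a parser would go in a
# shared library (both names below are hypothetical):
# parse-bbox-func-name=NvDsInferParseCustomModel
# custom-lib-path=libnvdsinfer_custom_parser.so
```

If a config like this is enough, the remaining question for me is only the custom output parsing, not a full inference implementation.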
I can provide the model if you wish to take a closer look; it isn’t anything proprietary to me.