Output node option in TensorFlow to TensorRT conversion

Conversion from TensorFlow to TensorRT:
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

Inference with TF-TRT frozen graph workflow:

graph = tf.Graph()
with graph.as_default():
    with tf.Session() as sess:
        # First deserialize your frozen graph:
        with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
        # Now you can create a TensorRT inference graph from your
        # frozen graph:
        trt_graph = trt.create_inference_graph(
            input_graph_def=graph_def,
            outputs=["output_list"],
            max_batch_size=1,
            max_workspace_size_bytes=2000,
            precision_mode="FP16")
        # Import the TensorRT graph into a new graph and run:
        output_node = tf.import_graph_def(
            trt_graph,
            return_elements=["output"])
        sess.run(output_node)
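
If you are not sure which node names to pass as outputs, one way to find candidates is to print the nodes of the deserialized frozen graph and look at the last few, since output nodes usually sit near the end of the GraphDef. A minimal sketch, reusing the graph_def loaded above:

        # Output nodes typically appear near the end of the GraphDef;
        # print the last few node names to identify candidates for "outputs".
        for node in graph_def.node[-10:]:
            print(node.name, node.op)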

I am using the TensorFlow SSD MobileNet model. Can I know what I should pass for the outputs list parameter?

Hello,

I’d recommend following this project for converting SSD MobileNet models.

There are some graph modifications that must be applied to the models before TF-TRT conversion will work. This project handles that.
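
As a rough guide, SSD models exported with the TensorFlow Object Detection API usually expose four output nodes: num_detections, detection_boxes, detection_scores, and detection_classes. So the conversion call would look something like the sketch below; verify the node names against your own frozen graph, since they can differ between exports, and note that the workspace size here is bumped to a more realistic value than the 2000 bytes in the snippet above:

trt_graph = trt.create_inference_graph(
    input_graph_def=graph_def,
    outputs=["num_detections", "detection_boxes",
             "detection_scores", "detection_classes"],
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,  # workspace in bytes; a few MB at minimum
    precision_mode="FP16")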

Best,
John