I built a very simple graph using TensorFlow:
input_image = tf.placeholder(dtype=tf.float32, shape=(1, None, None, 3), name='image_tensor')
conv1 = slim.conv2d(input_image, 16, [3,3],2)
conv2 = slim.conv2d(conv1, 16, [3,3],2)
conv1 = tf.reshape(conv1, [1,-1,16])
conv2 = tf.reshape(conv2, [1,-1,16])
result = tf.concat([conv1, conv2], 1)
However, when I convert the model to UFF and import the UFF file, parsing fails:
[TensorRT] INFO: UFFParser: parsing image_tensor
[TensorRT] INFO: UFFParser: parsing Conv/weights
[TensorRT] INFO: UFFParser: parsing Conv/Conv2D
[TensorRT] INFO: UFFParser: Convolution: add Padding Layer to support asymmetric padding
[TensorRT] INFO: UFFParser: Convolution: Left: 0
[TensorRT] INFO: UFFParser: Convolution: Right: 1
[TensorRT] INFO: UFFParser: Convolution: Top: 0
[TensorRT] INFO: UFFParser: Convolution: Bottom: 1
[TensorRT] INFO: UFFParser: parsing Conv/biases
[TensorRT] INFO: UFFParser: parsing Conv/BiasAdd
[TensorRT] INFO: UFFParser: parsing Conv/Relu
[TensorRT] INFO: UFFParser: parsing Reshape/shape
[TensorRT] INFO: UFFParser: parsing Reshape
[TensorRT] INFO: UFFParser: parsing Conv_1/weights
[TensorRT] INFO: UFFParser: parsing Conv_1/Conv2D
[TensorRT] INFO: UFFParser: Convolution: add Padding Layer to support asymmetric padding
[TensorRT] INFO: UFFParser: Convolution: Left: 0
[TensorRT] INFO: UFFParser: Convolution: Right: 1
[TensorRT] INFO: UFFParser: Convolution: Top: 0
[TensorRT] INFO: UFFParser: Convolution: Bottom: 1
[TensorRT] INFO: UFFParser: parsing Conv_1/biases
[TensorRT] INFO: UFFParser: parsing Conv_1/BiasAdd
[TensorRT] INFO: UFFParser: parsing Conv_1/Relu
[TensorRT] INFO: UFFParser: parsing Reshape_1/shape
[TensorRT] INFO: UFFParser: parsing Reshape_1
[TensorRT] INFO: UFFParser: parsing concat
[TensorRT] INFO: UFFParser: parsing MarkOutput_0
[TensorRT] ERROR: concat: all concat inputs must have same h and w
[TensorRT] ERROR: Failed to create engine
The inputs to the concat have the same shape except along axis 1. It seems that the reshape does not take effect in TensorRT.
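To make the shape claim concrete, here is a small sketch of the expected shapes, assuming a hypothetical 300x300 input and slim.conv2d's default 'SAME' padding with stride 2 (so each conv halves the spatial size, rounding up):

```python
import math

def conv_same_out(size, stride=2):
    # Spatial output size of a 'SAME'-padded conv with the given stride
    return math.ceil(size / stride)

h = w = 300
h1, w1 = conv_same_out(h), conv_same_out(w)    # conv1: 150 x 150
h2, w2 = conv_same_out(h1), conv_same_out(w1)  # conv2: 75 x 75

shape1 = (1, h1 * w1, 16)  # conv1 after reshape: (1, 22500, 16)
shape2 = (1, h2 * w2, 16)  # conv2 after reshape: (1, 5625, 16)
# concat along axis 1 -> (1, 28125, 16); the shapes differ only on axis 1
```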
Thanks
Hi,
We can parse your network correctly.
Could you recheck it?
Here is our source for your reference:
import tensorflow.contrib.slim as slim
import tensorflow as tf
import uff
inputs = tf.placeholder(dtype=tf.float32, shape=(1, None, None, 3))
conv1 = slim.conv2d(inputs, 16, [3,3],2)
conv2 = slim.conv2d(conv1, 16, [3,3],2)
conv1 = tf.reshape(conv1, [1,-1,16])
conv2 = tf.reshape(conv2, [1,-1,16])
result = tf.concat([conv1, conv2], 1, name='output')
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    graphdef = tf.get_default_graph().as_graph_def()
    frozen_graph = tf.graph_util.convert_variables_to_constants(sess, graphdef, ['output'])
    tf_model = tf.graph_util.remove_training_nodes(frozen_graph)
    uff_model = uff.from_tensorflow(tf_model, ["output"])
Thanks
Hi AastaLLL:
The error occurs when you parse the UFF model. The code is below:
import sys
import tensorflow.contrib.slim as slim
import tensorflow as tf
import uff
try:
    import tensorrt as trt
    from tensorrt.parsers import uffparser
except ImportError as err:
    sys.stderr.write("""ERROR: failed to import module ({})
Please make sure you have the TensorRT Library installed
and accessible in your LD_LIBRARY_PATH
""".format(err))
    exit(1)
MAX_WORKSPACE = 1 << 30
MAX_BATCHSIZE = 1
G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)
inputs = tf.placeholder(dtype=tf.float32, shape=(1, None, None, 3), name='image_tensor')
conv1 = slim.conv2d(inputs, 16, [3,3],2)
conv2 = slim.conv2d(conv1, 16, [3,3],2)
conv1 = tf.reshape(conv1, [1,-1,16])
conv2 = tf.reshape(conv2, [1,-1,16])
result = tf.concat([conv1, conv2], 1, name='output')
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    graphdef = tf.get_default_graph().as_graph_def()
    frozen_graph = tf.graph_util.convert_variables_to_constants(sess, graphdef, ['output'])
    tf_model = tf.graph_util.remove_training_nodes(frozen_graph)
    uff_model = uff.from_tensorflow(tf_model, ["output"])

parser = uffparser.create_uff_parser()
parser.register_input("image_tensor", (3, 300, 300), 0)
parser.register_output('output')
engine = trt.utils.uff_to_trt_engine(G_LOGGER,
                                     uff_model,
                                     parser,
                                     MAX_BATCHSIZE,
                                     MAX_WORKSPACE)
Hi,
Thanks for your feedback.
We are checking this internally and will update you later.
Thanks.
Hi,
In TensorRT 3 RC, reshape is only applied to constant weights.
Tensor reshapes are dropped when importing a UFF model into a TensorRT engine.
This issue has been reported and will be prioritized internally.
If the concat layer is at the end of your network, the most straightforward workaround (WAR) is to register conv1 and conv2 as separate outputs.
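With that workaround, the dropped reshape and concat can be reproduced on the host after inference. A minimal sketch with NumPy, assuming a hypothetical 300x300 input and that the engine returns the two conv outputs in CHW order (here filled with zeros as placeholders for the real engine outputs):

```python
import numpy as np

# Placeholder arrays standing in for the two registered engine outputs (NCHW)
conv1 = np.zeros((1, 16, 150, 150), dtype=np.float32)
conv2 = np.zeros((1, 16, 75, 75), dtype=np.float32)

# Reproduce the TF-side reshape + concat on the host:
# move channels last to match TF's NHWC layout, then flatten to (1, H*W, 16)
c1 = conv1.transpose(0, 2, 3, 1).reshape(1, -1, 16)
c2 = conv2.transpose(0, 2, 3, 1).reshape(1, -1, 16)
result = np.concatenate([c1, c2], axis=1)  # shape (1, 22500 + 5625, 16)
```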
Thanks and sorry for the inconvenience.