Provide details on the platforms you are using:
Linux distro and version: Ubuntu 16.04
GPU type: GTX 1080
NVIDIA driver version: 390.25
CUDA version: 9.0
cuDNN version: 7.2.1
Python version [if using Python]: 3.5
TensorFlow version: 1.11
TensorRT version: 4.0.1.6
If Jetson, OS, hw versions
Describe the problem
Here is my code:
import tensorflow as tf
import tensorflow.contrib.slim as slim

# net.net has shape (1, 4, 70, 192)
cnn_out = net.net
cnn_output_shape = tf.shape(cnn_out)
batch_size = cnn_output_shape[0]
cnn_output_h = cnn_output_shape[1]
cnn_output_w = cnn_output_shape[2]
cnn_output_channel = cnn_output_shape[3]

# cnn_out_transposed has shape (1, 70, 4, 192)
cnn_out_transposed = tf.transpose(cnn_out, [0, 2, 1, 3], name='f_t')

# cnn_out_reshaped has shape (1, 70, 1, 768)
cnn_out_reshaped = tf.reshape(cnn_out_transposed,
                              [batch_size, cnn_output_w, 1, cnn_output_h * cnn_output_channel],
                              name='f_r')
cnn_shape = cnn_out.get_shape().as_list()
cnn_out_reshaped.set_shape([cnn_shape[0], cnn_shape[2], 1, cnn_shape[1] * cnn_shape[3]])

# 1x1 conv, weights = (1, 1, 768, 7180) = 5514240 values
logits = slim.conv2d(cnn_out_reshaped, 7180, [1, 1], activation_fn=None)
# logits has shape (1, 70, 1, 7180)
logits = tf.squeeze(logits, [2])
# logits has shape (1, 70, 7180)
probs = tf.nn.softmax(logits, name='probs')
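In case it matters, below is a minimal sketch of the same head written with constant shape values instead of the tensors returned by tf.shape, so that the reshape target is baked into the frozen graph as a constant. The placeholder is only a stand-in for net.net, and whether the UFF parser handles this variant any better is an assumption on my part, not something I have verified:

import tensorflow as tf
import tensorflow.contrib.slim as slim

# Stand-in for net.net; in the real graph this is the CNN feature map of shape (1, 4, 70, 192).
cnn_out = tf.placeholder(tf.float32, [1, 4, 70, 192], name='cnn_out')

# Take the dimensions as Python ints so the reshape target is a constant
# in the frozen graph rather than a tensor computed by tf.shape().
n, h, w, c = cnn_out.get_shape().as_list()

x = tf.transpose(cnn_out, [0, 2, 1, 3], name='f_t')        # (1, 70, 4, 192)
x = tf.reshape(x, [n, w, 1, h * c], name='f_r')            # (1, 70, 1, 768)

logits = slim.conv2d(x, 7180, [1, 1], activation_fn=None)  # weights (1, 1, 768, 7180)
logits = tf.squeeze(logits, [2])                           # (1, 70, 7180)
probs = tf.nn.softmax(logits, name='probs')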
I used convert-to-uff to convert the .pb to .uff successfully, but when I execute this code:
model_file = '../output/crnn.pb'
uff_model = uff.from_tensorflow_frozen_model(model_file, ["probs"], list_nodes=False, quiet=False,
                                             input_node=['inputs,inputs,float32,1,32,280'])
it throws the errors below:
[TensorRT] ERROR(code line 21): Conv/Conv2D: kernel weights has count 5514240 but 96499200 was expected
[TensorRT] ERROR: UFFParser: Parser error: Conv/BiasAdd: The input to the Scale Layer is required to have a minimum of 3 dimensions.
[TensorRT] ERROR: Failed to parse UFF model stream
The 96499200 = 70 * 192 * 7180, so I think the TensorRT Reshape op computes a wrong result!
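For reference, the two counts in the error correspond exactly to the post-reshape and pre-reshape layouts of the input to the 1x1 convolution:

print(1 * 1 * 768 * 7180)       # 5514240  - the weight count actually stored in the .pb
print(1 * 1 * 70 * 192 * 7180)  # 96499200 - the count the parser says it expects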