This issue is directly linked to this topic: https://devtalk.nvidia.com/default/topic/1036940/jetson-tx2/uff-parser-errors but, since I have not gotten a response for over a month, I am hoping that providing a simpler model that demonstrates the issue can speed things up. Here is my network model:
import numpy as np
import tensorflow as tf

def WeightsVariable(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1, name='weights'))

def BiasVariable(shape):
    return tf.Variable(tf.constant(0.1, shape=shape, name='biases'))

def Conv2d(x, W, b, strides=1):
    # Conv2D wrapper, with symmetric padding, bias, and ReLU activation
    filter_size = W.get_shape().as_list()
    pad_size = filter_size[0] // 2
    pad_mat = np.array([[0, 0], [pad_size, pad_size], [pad_size, pad_size], [0, 0]])
    x = tf.pad(x, pad_mat)
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='VALID')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def network(images):
    # Convolution 1
    img_concat = tf.concat([images, images], -1)
    input_tensor = img_concat
    with tf.name_scope('conv1'):
        weights = WeightsVariable([5, 5, 2, 32])
        biases = BiasVariable([32])
        conv1 = Conv2d(input_tensor, weights, biases)  # Conv2d already applies ReLU
        flat1 = tf.reshape(conv1, [-1, 28 * 28 * 32])
    # Fully Connected 2
    with tf.name_scope('fc2'):
        weights = WeightsVariable([28 * 28 * 32, 10])
        biases = BiasVariable([10])
        fc2 = tf.nn.relu(tf.matmul(flat1, weights) + biases)
    return fc2
The network runs fine in TensorFlow, but I am getting errors from TensorRT when building the engine. Here is the full output log (using TensorRT 5):
[TensorRT] INFO: UFFParser: parsing Placeholder
[TensorRT] INFO: UFFParser: parsing concat
[TensorRT] ERROR: Parameter check failed at: ../builder/Layers.h::setAxis::334, condition: axis>=0
[TensorRT] INFO: UFFParser: parsing conv1/Variable
[TensorRT] INFO: UFFParser: parsing conv1/Conv2D
[TensorRT] INFO: UFFParser: parsing conv1/Variable_1
[TensorRT] INFO: UFFParser: parsing conv1/BiasAdd
[TensorRT] ERROR: conv1/Conv2D: kernel weights has count 1600 but 22400 was expected
[TensorRT] ERROR: UFFParser: Parser error: conv1/BiasAdd: The input to the Scale Layer is required to have a minimum of 3 dimensions.
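A detail that may help with diagnosis (this is my own arithmetic, so treat it with appropriate caution): the reported count 1600 is exactly 5·5·2·32, the NHWC kernel shape I define, while the expected 22400 is 5·5·28·32, which is what you would get if the parser read my [N, 28, 28, 2] input as NCHW and took 28 as the channel count:

```python
# kernel shape I define: [height, width, in_channels, out_channels] = [5, 5, 2, 32]
actual_count = 5 * 5 * 2 * 32
print(actual_count)  # 1600, matches "kernel weights has count 1600"

# if the NHWC input [N, 28, 28, 2] were read as NCHW, in_channels would be 28
expected_count = 5 * 5 * 28 * 32
print(expected_count)  # 22400, matches "but 22400 was expected"
```

If that reading is right, the mismatch would point at a layout disagreement (NHWC vs. NCHW) between the parser and my graph rather than at the weights themselves.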
How can I modify the network to get it to run on TensorRT?
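One thing I plan to try for the first error (the concat axis check): my guess, not confirmed, is that the UFF parser does not normalize negative axes, so passing the explicit non-negative axis instead of -1 might satisfy its axis >= 0 condition. The equivalence for a rank-4 NHWC tensor:

```python
def normalize_axis(axis, rank):
    # map a TF-style (possibly negative) axis to its non-negative equivalent
    return axis % rank

# for a rank-4 NHWC tensor, axis -1 is the channel axis 3
channel_axis = normalize_axis(-1, 4)
print(channel_axis)  # 3
```

So the change would be writing tf.concat([images, images], 3) instead of tf.concat([images, images], -1); the graph is unchanged, only the serialized axis value differs.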