FC Layer unsupported with TRT

For the VGG-16 model (keras/vgg16.py at master · keras-team/keras · GitHub), the fc1 and fc2 layer outputs differ between TensorFlow and TensorRT. I get a warning message, “DEBUG: convert reshape to flatten node”, during the UFF conversion. How can I fix this? Implementing the FC layers as convolutions works fine, as in models/vgg.py at master · tensorflow/models · GitHub.
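
For reference, the tensorflow/models VGG avoids the problem by writing the fully connected layers as convolutions, so no Flatten/Reshape node ever reaches the UFF converter. A rough, hedged sketch of that approach (TF 1.x / tf.contrib.slim, illustrative tensor names and shapes, not the exact models/vgg.py code):

import tensorflow as tf
import tensorflow.contrib.slim as slim

pool5 = tf.placeholder(tf.float32, [1, 7, 7, 512], name='pool5')      # last VGG-16 pooling output
net = slim.conv2d(pool5, 4096, [7, 7], padding='VALID', scope='fc6')  # 7x7 VALID conv == 4096-unit FC
net = slim.conv2d(net, 4096, [1, 1], scope='fc7')                     # second FC as a 1x1 conv
logits = slim.conv2d(net, 1000, [1, 1], activation_fn=None, scope='fc8')
logits = tf.squeeze(logits, [1, 2], name='fc8/squeezed')              # drop the 1x1 spatial dims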

same issue.

The number of nodes is also lower than in the example output, and the precision is very low.

A workaround I found was to use the tf.slim implementation instead.
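
Roughly, that means building the graph with the slim VGG instead of the Keras one (TF 1.x; module path and arguments as I remember them, so double-check against your install):

import tensorflow as tf
from tensorflow.contrib.slim.nets import vgg

inputs = tf.placeholder(tf.float32, [1, 224, 224, 3], name='input')
# vgg_16 implements the FC layers as convolutions, so the frozen graph has no
# Flatten/Reshape node for the UFF converter to rewrite.
logits, end_points = vgg.vgg_16(inputs, num_classes=1000, is_training=False)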

I had the same problem today. Seems like it’s a slim problem.

I changed

fc_flatten = slim.flatten(net)

to

net_shape = net.get_shape().as_list()  # static shape, so every target dimension is a known constant
fc_flatten = tf.reshape(net, (net_shape[0], net_shape[1] * net_shape[2] * net_shape[3]))

And the problem was resolved.
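
In context the change looks roughly like this (a hedged sketch with made-up tensor names; the point is that tf.reshape with a fully static target shape survives the conversion, whereas slim.flatten’s partially dynamic reshape is presumably what UFF rewrites into the problematic Flatten node):

import tensorflow as tf
import tensorflow.contrib.slim as slim

net = tf.placeholder(tf.float32, [1, 7, 7, 512], name='pool5')

# Before: fc_flatten = slim.flatten(net)
# After: an explicit reshape with a fully static target shape.
net_shape = net.get_shape().as_list()
fc_flatten = tf.reshape(net, (net_shape[0], net_shape[1] * net_shape[2] * net_shape[3]))
fc1 = slim.fully_connected(fc_flatten, 4096, scope='fc1')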

I also want to add my solution, similar to #4. I have a Keras model that uses VGG16 as the backbone, and the first layer after VGG16 is Flatten, whose output shape is (?, ?). This broke the TF → UFF → TRT pipeline (so TRT produced complete garbage).

The solution was to replace Flatten with Reshape:

-    x = Flatten()(vgg16_model.output)
+    x = Reshape((25088,))(vgg16_model.output)
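
A minimal, hedged sketch of that in a full model (tf.keras, illustrative layer names; 25088 = 7 * 7 * 512, the flattened size of VGG16’s last pooling output for a 224x224 input):

from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Reshape
from tensorflow.keras.models import Model

vgg16_model = VGG16(include_top=False, input_shape=(224, 224, 3))
# Reshape with a fully static target shape instead of Flatten, whose (?, ?)
# output shape breaks the TF -> UFF -> TRT conversion.
x = Reshape((25088,))(vgg16_model.output)        # 7 * 7 * 512
x = Dense(4096, activation='relu', name='fc1')(x)
model = Model(vgg16_model.input, x)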

Just to share my experience: you guys really helped me out, especially #5, as I’m using Keras as well. However, I still spent a full day debugging one remaining problem: I used reshape(1,1,300) instead of (1,1,300,), and this caused the “DEBUG: convert reshape to flatten node” error. I have no idea how this influences the Keras implementation of tf.reshape, but it did. So if you are using Keras’s Reshape layer, do remember to add the ‘,’ in the last place!! I almost went mad over this.
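
For anyone else tripping over this: the trailing comma matters most for single-element shapes, where it is what makes the argument a tuple at all, and Reshape expects a tuple of integers as its target shape. A generic tf.keras sketch (names illustrative):

from tensorflow.keras.layers import Reshape

flat = Reshape((25088,))    # (25088,) is a one-element tuple: a valid target_shape
# bad = Reshape((25088))    # (25088) is just the int 25088, not a tuple, and fails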