TensorRT Unsupported Layer Flatten

On my host machine I am using Keras with TensorFlow as the backend to develop neural networks, and I save the trained network as a .uff file for use on the Jetson TX2 (the conversion step I'm using is sketched at the end of this post). I've modified the SampleUffMNIST.cpp code to load my own .uff file, "foo.uff", but when I execute the code on the TX2 I get the following error:

ERROR: UFFParser: Validator error: flatten_1/Reshape: Unsupported operation Flatten

This seems odd to me because, according to the TensorRT documentation, Flatten is a supported layer.

Can anyone provide any insight into why I’m getting this error?

I installed TensorRT on the TX2 via the JetPack 3.2 Developer Preview, so it should support TensorFlow models.
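For reference, this is roughly how I export the model to UFF on the host. A minimal sketch only, assuming Keras 2 with the TensorFlow 1.x backend and the Python uff converter that ships with TensorRT 3.x; the output node name here is hypothetical and should be replaced with your model's own:

```python
import uff
import tensorflow as tf
from keras import backend as K

sess = K.get_session()
output_node = "dense_2/Softmax"  # hypothetical -- use your model's output node

# Fold the trained variables into constants, then convert the frozen graph.
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph_def, [output_node])
uff.from_tensorflow(frozen, [output_node], output_filename="foo.uff")
```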

Thanks

I have the same problem. I have a TF model with a flatten layer (created with tf.contrib.layers.flatten). I am able to create a UFF model from the TF model just fine, but when I go to parse it, I get the same error you are getting.

The TensorRT documentation is confusing, though. Section 1.1 lists Flatten among TensorRT's layers, but Section 2.3.2.2.4 ("Supported TensorFlow Operations") does NOT list Flatten as supported.

Hi,

The Flatten operation is not available until TensorRT 3.0 GA (libnvinfer 4.0.1).
Jetson is currently on TensorRT 3.0 RC (libnvinfer 4.0.0), which doesn't support this op yet.

Please watch for our announcement of the next JetPack release.
Although we cannot disclose a concrete schedule, it is coming soon.

Thanks.

Well, I eagerly await the new release!

Also, dwd_pete, in the meantime I replaced my Flatten layer with a Reshape layer, which accomplishes the same thing (a sketch of the workaround is below). However, I now have other problems with my convolutional layers reporting "dimension 0", which I don't understand. The model works on my host machine, so I'm not sure what I've done wrong. I may switch to implementing my network in Caffe instead of Keras/TensorFlow, since Caffe models seem to have better support on the TX2.
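Roughly what the workaround looks like; a minimal sketch only, assuming a Keras Sequential model. The layer sizes are illustrative, not the ones from my actual network:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Reshape, Dense

model = Sequential()
model.add(Conv2D(64, (3, 3), activation="relu", input_shape=(28, 28, 1)))
model.add(MaxPooling2D((2, 2)))

# Instead of Flatten(), compute the flattened size and use Reshape, so the
# exported graph contains a plain Reshape op rather than the Flatten op
# the UFF parser rejects.
flat_dim = int(np.prod(model.output_shape[1:]))
model.add(Reshape((flat_dim,)))

model.add(Dense(10, activation="softmax"))
```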

Hello @AastaLLL,

Since you said the Flatten operation is available as of TensorRT 3.0 GA, I was wondering whether there is an API call such as "addFlatten()", similar to "addScale()", in the C++ API? I'm constructing the network definition in C++ but cannot find any such docs or definitions in NvInfer.h.

Could you please help me answer this question?

Thanks.

Hi,

Currently, TensorRT only supports a flatten layer that is placed directly in front of a FullyConnected layer.
In that case, TensorRT implicitly flattens the input, so no extra layer needs to be added.

You can find this information here:
http://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#layers

Flatten
The Flatten layer flattens the input while maintaining the batch_size. Assumes that the first dimension represents the batch. The Flatten layer can only be placed in front of the Fully Connected layer.
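As an illustration at the model-definition level, this is the only placement that satisfies that constraint: a flatten feeding directly into a fully connected (Dense) layer. A sketch only; the layer sizes are hypothetical, and the Dense layer is assumed to map to TensorRT's FullyConnected layer:

```python
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (5, 5), activation="relu", input_shape=(28, 28, 1)),
    Flatten(),                        # placed immediately before the Dense layer
    Dense(10, activation="softmax"),  # maps to a FullyConnected layer
])
# A Flatten that does not feed directly into a fully connected layer falls
# outside what TensorRT can fold away implicitly.
```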

Thanks.