SE net in UFF format

I am trying to implement a Squeeze-and-Excitation network in Keras and convert the frozen graph to TensorRT, but I am hitting unsupported operations with the convert-to-uff tool. Here is the code for the SE block that causes the problems before/after the Multiply operation (the freeze step I use is sketched right after it):

from keras.layers import Dense, GlobalAveragePooling2D, Multiply, Reshape

def squeeze_excite_block(inp, filters, ratio=16):
    # Squeeze: global average pool to one value per channel
    x = GlobalAveragePooling2D()(inp)
    # Excite: bottleneck MLP producing per-channel scales in (0, 1)
    x = Dense(filters // ratio, activation='relu',
              kernel_initializer='he_normal',
              use_bias=False)(x)
    x = Dense(filters, activation='sigmoid',
              kernel_initializer='he_normal',
              use_bias=False)(x)
    # Reshape to (1, 1, C) so the scales broadcast over H and W (NHWC)
    x = Reshape((1, 1, filters))(x)
    x = Multiply()([inp, x])
    return x
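
For context, this is roughly how I freeze the graph before running convert-to-uff on it. It is only a sketch, assuming TF 1.x with standalone Keras, and the file name is just a placeholder:

import tensorflow as tf
from keras import backend as K

def freeze_keras_graph(model, pb_path='frozen_model.pb'):
    # Fold variables into constants so convert-to-uff can read the graph offline
    output_names = [out.op.name for out in model.outputs]
    sess = K.get_session()
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), output_names)
    with tf.gfile.GFile(pb_path, 'wb') as f:
        f.write(frozen.SerializeToString())
    return output_names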

I call squeeze_excite_block in the ResNet block before adding the shortcut, like below:

....
x = BatchNormalization(axis=3)(x)
x = squeeze_excite_block(x, filters)
x = Add()([x, inp])
x = Activation("relu")(x)
....

The channels axis is last (NHWC) and x is an (8, 8, 64) tensor.
When converting the pb file I only get warnings, and that is after several attempts at
figuring out where to put the Reshape so that the conversion goes through:

Converting to UFF graph
DEBUG: convert reshape to flatten node
Warning: keepdims is ignored by the UFF Parser and defaults to True
DEBUG: convert reshape to flatten node

But when I then try to use the converted UFF model, I get the following errors:

add_2/add: elementwise inputs must have same dimensions or follow broadcast rules (input dimensions were [8,8,64] and [64,8,8])
conv2d_9/convolution: at least three non-batch dimensions are required for input
UFFParser: Parser error: batch_normalization_9/batchnorm/mul_1: The input to the Scale Layer is required to have a minimum of 3 dimensions.

So it is failing at the Add line for the shortcut. It seems the (8, 8, 64) tensor has somehow been transposed to (64, 8, 8) during the multiply.
I have also tried using RepeatVector(64) before the multiply to do the broadcast manually
(roughly as sketched below), but that also fails, with an unsupported ExpandDims operation.
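
The manual-broadcast variant I tried looked roughly like this. It is only a sketch, not the exact code, and the 8x8 spatial size is hard-coded from my network (RepeatVector is presumably where the ExpandDims comes from):

from keras.layers import (Dense, GlobalAveragePooling2D, Multiply,
                          RepeatVector, Reshape)

def squeeze_excite_block_manual(inp, filters, ratio=16):
    x = GlobalAveragePooling2D()(inp)
    x = Dense(filters // ratio, activation='relu', use_bias=False)(x)
    x = Dense(filters, activation='sigmoid', use_bias=False)(x)
    x = RepeatVector(8 * 8)(x)           # (batch, filters) -> (batch, 64, filters)
    x = Reshape((8, 8, filters))(x)      # reshape to NHWC so it matches inp
    x = Multiply()([inp, x])
    return x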

Is there a way to modify it so that SE nets are possible in TensorRT?

Not all warnings can be ignored. For example, dropping keepdims and defaulting it to True is probably not going to work. You could try writing a custom plugin to handle that layer as a first step.

I am not sure which layer is unsupported. The multiply operation seems to be supported, because if I use

def squeeze_excite_block(inp, filters, ratio=16):
    # stripped down to just the multiply, to test whether Multiply itself is supported
    x = Multiply()([inp, inp])
    return x

it works, even though it gives wrong results. Reshape also seems to be supported.

Thanks.

I have made this work by changing the TensorFlow network to channels-first format
(just the global Keras data-format setting, sketched below). However, I do not like this solution,
because it means I cannot run inference on the CPU with TensorFlow, since TensorFlow on the CPU uses NHWC.
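
For completeness, the switch itself is just the global data-format setting (assuming standalone Keras); input shapes then have to be given as (C, H, W):

from keras import backend as K

# Every Keras layer now defaults to NCHW ('channels_first')
K.set_image_data_format('channels_first')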

I suspect the problem is TensorFlow inserting transpose operations from NHWC to NCHW when
running on the GPU (listing the Transpose nodes in the frozen graph, as sketched below, is an easy way to check).
It would be great if TensorRT worked with the squeeze_excite_block written in channels-last format.
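
To check that suspicion, this small sketch lists the Transpose nodes in the frozen graph (frozen_model.pb is a placeholder for the actual .pb file):

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_model.pb', 'rb') as f:   # placeholder file name
    graph_def.ParseFromString(f.read())

# Any layout-conversion nodes TensorFlow added show up here
print([n.name for n in graph_def.node if n.op == 'Transpose'])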

Finally found a solution that works both on CPU and GPU.
The trick is to always do the squeeze-and-excite in NCHW, by transposing the input
and then transposing the output back. Now TensorRT does not choke on it, and it also runs on the CPU.

from keras.layers import (Dense, GlobalAveragePooling2D, Multiply,
                          Permute, Reshape)

def squeeze_excite_block(inp, filters, ratio=16):
    """Channel scaling with squeeze and excitation, done internally in NCHW."""
    inp = Permute((3, 1, 2))(inp)                      # NHWC -> NCHW
    x = GlobalAveragePooling2D(data_format='channels_first')(inp)
    x = Dense(filters // ratio, activation='relu',
              kernel_initializer='he_normal',
              use_bias=False)(x)
    x = Dense(filters, activation='sigmoid',
              kernel_initializer='he_normal',
              use_bias=False)(x)
    x = Reshape((filters, 1, 1))(x)                    # broadcast over H and W
    x = Multiply()([inp, x])
    x = Permute((2, 3, 1))(x)                          # NCHW -> back to NHWC
    return x
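
As a quick sanity check (a sketch using the dimensions from my network), the block still takes and returns an NHWC tensor of the same shape:

from keras.layers import Input
from keras.models import Model

inp = Input(shape=(8, 8, 64))                 # NHWC, as in my network
out = squeeze_excite_block(inp, filters=64)
model = Model(inp, out)
print(model.output_shape)                     # expect (None, 8, 8, 64)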