I am trying to implement a Squeeze-and-Excitation network in Keras and convert the frozen graph to TensorRT, but I am hitting an unsupported operation when using the convert-to-uff tool. Here is the code for the SE block that causes problems before/after the Multiply operation:
def squeeze_excite_block(inp, filters, ratio=16):
    x = GlobalAveragePooling2D()(inp)
    x = Dense(filters // ratio, activation='relu', kernel_initializer='he_normal', use_bias=False)(x)
    x = Dense(filters, activation='sigmoid', kernel_initializer='he_normal', use_bias=False)(x)
    x = Reshape((1, 1, filters))(x)
    x = Multiply()([inp, x])
    return x
I call this function in the ResNet before applying the shortcut, like below:
....
x = BatchNormalization(axis=3)(x)
x = squeeze_excite_block(x, filters)
x = Add()([x, inp])
x = Activation("relu")(x)
....
The channels axis is last and x is an (8,8,64) tensor.
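As a sanity check, this is the broadcast I expect the Multiply to perform in channels-last order (plain NumPy, with a made-up per-channel scale just for illustration):

```python
import numpy as np

feat = np.ones((8, 8, 64))                  # the (8, 8, 64) feature map
scale = np.arange(64).reshape((1, 1, 64))   # per-channel weights after the Reshape
out = feat * scale                          # broadcasts over the two spatial dims
print(out.shape)                            # (8, 8, 64)
```

So in channels-last the (1, 1, 64) tensor broadcasts cleanly against (8, 8, 64).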
When converting the pb file I get only warnings, and that is after several attempts at figuring out where to put the reshape so that the conversion passes:
Converting to UFF graph
DEBUG: convert reshape to flatten node
Warning: keepdims is ignored by the UFF Parser and defaults to True
DEBUG: convert reshape to flatten node
But then, when trying to use the converted UFF model, I get the following errors:
add_2/add: elementwise inputs must have same dimensions or follow broadcast rules (input dimensions were [8,8,64] and [64,8,8])
conv2d_9/convolution: at least three non-batch dimensions are required for input
UFFParser: Parser error: batch_normalization_9/batchnorm/mul_1: The input to the Scale Layer is required to have a minimum of 3 dimensions.
So it is failing at the Add line for the shortcut. It seems the (8,8,64) tensor has somehow been transposed to (64,8,8) when doing the multiply, for some reason.
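For reference, the two shapes from the error message really are incompatible under normal broadcasting rules, which matches the failure (a quick NumPy check):

```python
import numpy as np

a = np.ones((8, 8, 64))   # shape reported for one Add input
b = np.ones((64, 8, 8))   # shape reported for the other
try:
    a + b
except ValueError as e:
    # trailing dims 64 vs 8 cannot be broadcast together
    print("broadcast failed:", e)
```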
I have also tried using RepeatVector(64) before the multiply to do the broadcast manually, but that also fails, with an "Unsupported ExpandDims" operation error.
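Roughly, the RepeatVector variant I tried looks like this (a sketch with the 8x8x64 shapes hard-coded; RepeatVector(64) repeats the channel vector once per spatial position, since 8 * 8 = 64 here):

```python
from tensorflow.keras.layers import (Input, GlobalAveragePooling2D, Dense,
                                     RepeatVector, Reshape, Multiply)
from tensorflow.keras.models import Model

inp = Input(shape=(8, 8, 64))
s = GlobalAveragePooling2D()(inp)                        # (batch, 64)
s = Dense(64 // 16, activation='relu', use_bias=False)(s)
s = Dense(64, activation='sigmoid', use_bias=False)(s)
s = RepeatVector(64)(s)                                  # (batch, 64, 64), one copy per spatial position
s = Reshape((8, 8, 64))(s)                               # (batch, 8, 8, 64)
out = Multiply()([inp, s])                               # plain elementwise multiply, no broadcast
model = Model(inp, out)
print(model.output_shape)                                # (None, 8, 8, 64)
```

This builds and runs fine in Keras; it is only the UFF conversion that rejects it.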
Is there a way to modify the block so that SE nets are possible in TensorRT?