How to rewrite TensorFlow's tf.reduce_sum in a way that can be parsed

Hi,

I have this custom layer, shown below.

import tensorflow as tf

class Conv2DWeightNorm(tf.layers.Conv2D):

  def build(self, input_shape):
    # Per-filter scale g for weight normalization.
    self.wn_g = self.add_weight(
        name='wn_g',
        shape=(self.filters,),
        dtype=self.dtype,
        initializer=tf.initializers.ones,
        trainable=True,
    )
    super(Conv2DWeightNorm, self).build(input_shape)
    # Squared L2 norm of each filter over height, width, and input channels.
    square_sum = tf.reduce_sum(
        tf.square(self.kernel), [0, 1, 2], keepdims=False)
    inv_norm = tf.rsqrt(square_sum)
    # Reparameterize the kernel as g * v / ||v||.
    self.kernel = self.kernel * (inv_norm * self.wn_g)

Currently the

tf.reduce_sum(tf.square(self.kernel), [0, 1, 2], keepdims=False)

is giving me an error

[TensorRT] ERROR: UFFParser: Parser error: model_lr/0/conv2d_weight_norm/Sum: axes.size() != 0 not yet supported for reduce constant nodes

However, this should just be a simple linear operation. How can I rewrite the reduce_sum step using tf operations that are supported? I've looked at the supported TF operations in Support Matrix :: NVIDIA Deep Learning TensorRT Documentation, but I think it's making me even more confused as to what I can use.

Thanks in advance!

Hi, wdai03
I've also run into this problem; did you find a solution?

Reduce sum should be achievable through a simple matrix multiplication, since it's just a linear operation.

I THINK it should be doable by replacing the tf.reduce_sum with a matrix operation in the code. I'm not sure how the UFF parsing works exactly, but I imagine a simple matrix multiplication can't be too difficult?
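Something like this is what I have in mind: a minimal, untested sketch that expresses the sum over axes [0, 1, 2] as a single matmul with a ones vector, assuming the kernel's static shape is fully known (the function name reduce_sum_as_matmul is just my placeholder):

import tensorflow as tf

def reduce_sum_as_matmul(x):
  # x: the kernel, shape (H, W, C_in, filters), statically known.
  h, w, c_in, filters = x.shape.as_list()
  # Flatten everything but the filter axis, then sum each column by
  # multiplying with a row vector of ones: (1, N) x (N, filters).
  flat = tf.reshape(x, [h * w * c_in, filters])
  ones = tf.ones([1, h * w * c_in], dtype=x.dtype)
  return tf.reshape(tf.matmul(ones, flat), [filters])

# Possible drop-in replacement for the failing line in build():
# square_sum = reduce_sum_as_matmul(tf.square(self.kernel))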

I haven't gotten around to trying it yet, though, since it's been so long since I've looked at the low-level calculations in deep learning. Please let me know if this works for you!

If that doesn't work, there must be a way to specify it through TensorRT's API directly, but I'm even less familiar with that, and unfortunately NVIDIA's support team only seems to reply selectively.

It should definitely be doable somehow, I think.

Using numpy or similar libraries cannot replace tf.reduce_sum. What I'm doing instead is moving all the feature and weight normalization ops into the loss function rather than putting them in the layer; this works for both training the model and the UFF transfer.

Can you explain how you include it in the loss function?

Can you maybe provide an example of what you've done, if possible? That would be very helpful!

Thanks!

You can check this file:
https://github.com/Joker316701882/Additive-Margin-Softmax/blob/master/AM_softmax.py

What you need is to add embedding feature normalization; the embedding feature comes from your last network layer.
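For instance, a minimal sketch of that idea (my own illustration, not the exact code from AM_softmax.py; the names embeddings, labels, and class_weights are placeholders), assuming a TF 1.x setup where all normalization lives only in the training loss:

import tensorflow as tf

def normalized_softmax_loss(embeddings, labels, class_weights):
  # embeddings: output of your last network layer, shape (batch, dim).
  # All normalization happens here, at training time only, so the
  # exported inference graph stays free of the unsupported reduce ops.
  feat = tf.nn.l2_normalize(embeddings, axis=1)  # feature norm
  w = tf.nn.l2_normalize(class_weights, axis=0)  # weight norm
  logits = tf.matmul(feat, w)                    # cosine logits
  return tf.reduce_mean(
      tf.nn.sparse_softmax_cross_entropy_with_logits(
          labels=labels, logits=logits))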