Error in UFF Parsing - Add layer - "all elementwise inputs must have same dimensions"

I’m trying to implement a variant of ResNet on a Jetson TX2.

I built and trained my model in Keras, froze the model to a .pb file, and converted it to UFF with convert-to-uff. All of this completed successfully.

Then I used sampleUffMNIST.cpp as a reference to load the UFF model for inference on the Jetson. While loading it, I got the following error:

ERROR: add_3/add: all elementwise inputs must have same dimensions or follow the broadcasting rules
ERROR: sample_uff: Unable to create engine
ERROR: sample_uff: Model load failed

This error comes from the add_3 layer, which adds the outputs of two Conv+activation layers. From the model summary in Keras, I can see that those two Conv layers have the same output shape, (?, 128, 23, 40), and the add_3 layer also has that same output shape.
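For context, TensorRT’s elementwise layer enforces the same rule as NumPy-style broadcasting: the two inputs must have identical dimensions or be broadcastable against each other. A minimal NumPy sketch (the shapes mirror the Keras summary above, minus the batch dimension) shows when an elementwise add is legal and when it raises the kind of mismatch the error message describes:

```python
import numpy as np

# Two feature maps with identical (C, H, W) shapes: elementwise add is legal.
a = np.zeros((128, 23, 40))
b = np.zeros((128, 23, 40))
print((a + b).shape)  # (128, 23, 40)

# Shapes that differ in one spatial dimension and are not broadcastable:
# this is the situation TensorRT's elementwise layer rejects.
c = np.zeros((128, 22, 40))
try:
    _ = a + c
except ValueError as e:
    print("broadcast error:", e)
```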

What is wrong here? Isn’t there any way to implement ResNets on a Jetson?

Any kind of help is appreciated…

Here’s my model architecture: https://drive.google.com/open?id=1Dxc3FJkUyNxiFj0xaYWKy1rwrfreGtV-

Thanks!

Hi,

It looks like the inputs of add_3/add have different dimensions.
Could you check the output dimensions of activation_7 and activation_8 in the UFF model?

Ex.

# Convert the graph, marking the layer you want to inspect as an output
uff_model = uff.from_tensorflow(sess.graph_def, ['activation_7'])

parser = uffparser.create_uff_parser()
parser.register_input(...)
parser.register_output('activation_7')

# Build an engine and read back the binding dimensions of that output
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser,
                                     MAX_BATCHSIZE, MAX_WORKSPACE,
                                     trt.infer.DataType.FLOAT)
dims = engine.get_binding_dimensions(2).to_DimsCHW()
print('TensorRT output shape = (%d, %d, %d)' % (dims.C(), dims.H(), dims.W()))

Thanks.

I rewrote the residual block of my architecture using pure TensorFlow ops (without Keras), and without any reshape or transpose operations. The UFF generated from that also gives the same error:

ERROR: Add_2: all elementwise inputs must have same dimensions or follow the broadcasting rules.

I opened the UFF model in the C++ TensorRT interface and inspected the dimensions of the inputs of the Add_2 layer. They are wrong: 'Conv_G_relu' is (128, 22, 40) and 'Conv_H_relu' is (128, 23, 40). However, the model trains fine in TensorFlow, so the dimensions are correct in the TensorFlow graph. I suspect something weird is happening in the UFF converter.

Here are my new files (written with pure tensorflow ops): https://drive.google.com/open?id=1G3eUakMQumHCm2IFEfH6nGcmBq0gtkR-

What could be the issue?

After checking the shapes of tensors in TensorFlow under various configurations, I think there may be a bug in NVIDIA’s UFF converter that mishandles TensorFlow’s padding="VALID" convolutions.
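For reference, TensorFlow’s output-size formulas differ between the two padding modes, and confusing one for the other produces exactly an off-by-one like the 22 vs. 23 seen above. A quick sketch (the input size, kernel, and stride below are hypothetical, chosen only to reproduce the mismatch):

```python
import math

def conv_out_size(in_size, kernel, stride, padding):
    """TensorFlow convolution output size for SAME vs. VALID padding."""
    if padding == 'SAME':
        return math.ceil(in_size / stride)
    elif padding == 'VALID':
        return math.ceil((in_size - kernel + 1) / stride)
    raise ValueError(padding)

# Hypothetical numbers: input height 46, kernel 3, stride 2.
print(conv_out_size(46, 3, 2, 'SAME'))   # 23
print(conv_out_size(46, 3, 2, 'VALID'))  # 22
```

So a converter that applies the VALID formula to a SAME convolution (or vice versa) would report a height of 22 where the TensorFlow graph actually produces 23.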

Hi,

Thanks for the experiment.

There is a known issue with the padding output size: https://devtalk.nvidia.com/default/topic/1028045
The fix is available in our latest TensorRT release.
Would you mind testing your model with our latest package first?
https://developer.nvidia.com/nvidia-tensorrt-download

Thanks.

I updated my JetPack version (with TensorRT 4) and the code worked like a charm. Thanks.

But please keep in mind the people who use NVIDIA packages like TensorRT on Jetson. We cannot simply update TensorRT; we have to reflash the device, which breaks the other installations and setups we have done. Some way of updating libraries without reflashing JetPack would be appreciated.

Thanks for the support anyway.

I had used Conv1D, and after changing it to Conv2D, the problem was fixed.
Thank you very much.