I created a model with Keras, and everything works correctly in Keras.
But when I run inference with TensorRT, the output dimensions are not correct.
You can reproduce this by building a simple network in Keras and comparing the output size (H*W*C*4 bytes) between Keras and TensorRT:
x = ZeroPadding2D((3, 3))(img_input)
x = Conv2D(12, kernel_size=(3, 3), strides=(1, 1), dilation_rate=(2, 2), padding='same')(x)
I found that the problem is caused by using padding='same'.
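The mismatch is easiest to see as arithmetic. Below is a minimal sketch (plain Python, no TensorFlow or TensorRT required) comparing the output size Keras computes for padding='same' against a 'valid'-style size for the same dilated conv. The idea that TensorRT effectively computed a 'valid'-like size here is an assumption on my part, and the 224 input size is just an example:

```python
import math

def conv_out_size(in_size, kernel, stride, dilation, padding):
    """Output spatial size of a 2D convolution along one axis."""
    # Dilation enlarges the effective receptive field of the kernel.
    eff_kernel = dilation * (kernel - 1) + 1
    if padding == 'same':
        # 'same' keeps out = ceil(in / stride), independent of dilation.
        return math.ceil(in_size / stride)
    # 'valid': no implicit padding, the effective kernel must fit entirely.
    return (in_size - eff_kernel) // stride + 1

h = 224 + 2 * 3  # example input height after ZeroPadding2D((3, 3))
same_h = conv_out_size(h, kernel=3, stride=1, dilation=2, padding='same')
valid_h = conv_out_size(h, kernel=3, stride=1, dilation=2, padding='valid')
print(same_h, valid_h)  # 230 vs 226: the shapes (and HWC*4 byte sizes) diverge
```

With kernel 3 and dilation 2 the effective kernel is 5, so the two conventions differ by 4 pixels per axis, which is enough to make the buffer sizes disagree between frameworks.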
So how can I fix it?
=> I fixed it by changing the model to avoid padding='same'.