[TensorRT Python API] Convolutional layer error

I have the following architecture:

##########################
## AllCNN No Batch Norm
##########################
Tensor("input_layer_1:0", shape=(?, 28, 28, 1), dtype=float32)
Tensor("AllCNNM/conv2d/Relu:0", shape=(?, 14, 14, 16), dtype=float32)
Tensor("AllCNNM/conv2d_2/Relu:0", shape=(?, 7, 7, 32), dtype=float32)
Tensor("AllCNNM/conv2d_3/Relu:0", shape=(?, 5, 5, 64), dtype=float32)
Tensor("AllCNNM/Conv2d/conv2d/Relu:0", shape=(?, 3, 3, 10), dtype=float32)
Tensor("AllCNNM/Mean:0", shape=(?, 10), dtype=float32)
Tensor("AllCNNM/Softmax:0", shape=(?, 10), dtype=float32)

Whenever I try to parse the UFF model to build an inference engine, I get:

[TensorRT] INFO: Detecting Framework
[TensorRT] INFO: Parsing Model from uff
[TensorRT] INFO: UFFParser: parsing input_layer_1
[TensorRT] INFO: UFFParser: parsing AllCNNM/conv2d/kernel
[TensorRT] INFO: UFFParser: parsing AllCNNM/conv2d/Conv2D
[TensorRT] INFO: UFFParser: Convolution: add Padding Layer to support asymmetric padding
[TensorRT] INFO: UFFParser: Convolution: Left: 1
[TensorRT] INFO: UFFParser: Convolution: Right: 1
[TensorRT] INFO: UFFParser: Convolution: Top: 1
[TensorRT] INFO: UFFParser: Convolution: Bottom: 2
[TensorRT] INFO: UFFParser: parsing AllCNNM/conv2d/bias
[TensorRT] INFO: UFFParser: parsing AllCNNM/conv2d/BiasAdd
[TensorRT] INFO: UFFParser: parsing AllCNNM/conv2d/Relu
[TensorRT] INFO: UFFParser: parsing AllCNNM/conv2d_1/kernel
[TensorRT] INFO: UFFParser: parsing AllCNNM/conv2d_2/Conv2D
python: Network.h:104: virtual nvinfer1::DimsHW nvinfer1::NetworkDefaultConvolutionFormula::compute(nvinfer1::DimsHW, nvinfer1::DimsHW, nvinfer1::DimsHW, nvinfer1::DimsHW, nvinfer1::DimsHW, const char*): Assertion `(input.w() + padding.w() * 2) >= dkw && "Image width with padding must always be at least the width of the dilated filter."' failed.
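
For reference, the asymmetric padding the parser reports is consistent with TensorFlow's `SAME` padding arithmetic. Below is a quick sketch of that computation; the 5x5 kernel and stride 2 are assumptions inferred from the layer shapes above (28 -> 14 -> 7), since those values reproduce the `Top: 1` / `Bottom: 2` split in the log:

```python
import math

def tf_same_padding(size, kernel, stride):
    """Compute TensorFlow 'SAME' padding for one spatial dimension.

    Returns (pad_before, pad_after, output_size). When the total
    padding is odd, TF puts the extra pixel on the after side
    (bottom/right), which is exactly the asymmetric padding the
    UFF parser inserts an extra Padding layer for.
    """
    out = math.ceil(size / stride)
    pad_total = max((out - 1) * stride + kernel - size, 0)
    pad_before = pad_total // 2
    pad_after = pad_total - pad_before
    return pad_before, pad_after, out

# 28x28 input, 5x5 kernel, stride 2 (assumed, see above):
# total padding is 3, split 1 before / 2 after -> asymmetric.
print(tf_same_padding(28, 5, 2))  # (1, 2, 14)
```

So the padding itself looks normal for these shapes; the assertion only fires because the padded input width ends up smaller than the dilated filter width, which should not happen for a 14x14 or 7x7 spatial input.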

Any ideas? (My first guess is a channel-ordering mismatch: the TensorFlow graph is NHWC, so if the parser interprets the dimensions as NCHW, the "width" it sees could be smaller than the filter.)