Failed to convert model with deconvolution layer in TensorRT

I am trying to convert a detection model from FP32 to INT8 in TensorRT. It works fine when I use a model without a deconvolution layer; however, when I try it on a model with a deconvolution layer, the converted model fails to detect any objects. My TensorRT versions are 4.0 and 3.0.4.
Does TensorRT 4.0 or 3.0.4 support converting models with deconvolution layers from FP32 to INT8? I would truly appreciate any advice. Thanks.
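
In case it helps, here is roughly how I build the INT8 engine. This is a simplified sketch assuming the Caffe parser and the C++ builder API; the file paths, the output blob name, and the calibrator object are placeholders for my actual code (the calibrator is my own class implementing IInt8EntropyCalibrator that feeds pre-processed calibration batches):

```cpp
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Minimal logger required by createInferBuilder().
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

// "calibrator" is my own IInt8EntropyCalibrator implementation (omitted here).
// "deploy.prototxt", "model.caffemodel" and "detection_out" are placeholder
// names for my actual files and output blob.
ICudaEngine* buildInt8Engine(IInt8Calibrator* calibrator)
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();

    // Parse the FP32 Caffe model (the detection net, with or without deconv).
    ICaffeParser* parser = createCaffeParser();
    const IBlobNameToTensor* blobs =
        parser->parse("deploy.prototxt", "model.caffemodel", *network, DataType::kFLOAT);
    network->markOutput(*blobs->find("detection_out"));

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 30);

    // Enable INT8 mode and attach the calibrator; everything else is the same
    // as the FP32 build that works for me.
    builder->setInt8Mode(true);
    builder->setInt8Calibrator(calibrator);

    ICudaEngine* engine = builder->buildCudaEngine(*network);

    network->destroy();
    parser->destroy();
    builder->destroy();
    return engine;
}
```

The same build code produces a working INT8 engine for the model without the deconvolution layer, so I suspect the difference is in how the deconvolution layer is handled during INT8 calibration or conversion.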