Poor INT8 results on TensorRT 4.0.1.6 and TensorRT 5.x

I have tried INT8 inference on TensorRT 3.x, TensorRT 4.0.0.3, and TensorRT 4.0.1.6. Detection and segmentation results with INT8 precision are far too poor on TensorRT 4.0.1.6 and TensorRT 5.x, while the INT8 results on TensorRT 3.x and TensorRT 4.0.0.3 are good. Everything else is identical; only the *.so libraries and include headers differ between runs.
Why do TensorRT 4.0.1.6 and TensorRT 5.x produce such poor INT8 results?