Int8 engine size differs between 1080 Ti and 2080 Ti

1080 Ti env:       2080 Ti env:
CUDA 9.0           CUDA 10.0
cuDNN 7.5          cuDNN 7.5
Python 3.7         Python 3.7
TensorRT 5.1.2     TensorRT 5.1.2
Ubuntu 16.04       Ubuntu 16.04

With max batch size set to 32, the YOLOv3 int8 TRT engine is 60 MB on the 1080 Ti, but 126 MB on the 2080 Ti. Besides, I notice there is only a small difference between int8 and fp16 mode on the 2080 Ti: both engines are close to 120 MB. Is this normal?
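For reference, the sizes quoted above are simply the serialized `.trt` engine files on disk. A minimal sketch for comparing them (the engine filenames here are hypothetical, not the actual paths used):

```python
import os

def engine_size_mb(path):
    """Size of a serialized TensorRT engine file in MB (1 MB = 1024*1024 bytes)."""
    return os.path.getsize(path) / (1024 * 1024)

# Hypothetical filenames: compare the int8 and fp16 builds from the same GPU.
# for name in ("yolov3_int8.trt", "yolov3_fp16.trt"):
#     print(name, round(engine_size_mb(name), 1), "MB")
```

On a card where int8 kernels are actually selected, the int8 engine is usually noticeably smaller than the fp16 one, which is why the near-identical sizes on the 2080 Ti stand out.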