Accuracy drops sharply when doing INT8 calibration

I use TensorRT 5.0 for INT8 calibration. The original model is an object-detection caffemodel.
I find that accuracy drops sharply when the ground-truth objects in the image are small.
I cropped the original image to make the objects larger, but it helped very little. It is even worse when the ground-truth objects include both small and large objects. Many objects are missed when running inference with the INT8-calibrated model.

I would like to know how TensorRT does INT8 calibration. Why does it fail on small objects?
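My current understanding (please correct me if wrong) is that TensorRT's default entropy calibrator collects a histogram of each tensor's activations over the calibration set, then picks a clipping threshold that minimizes the KL divergence between the fp32 distribution and its int8-quantized version. A simplified NumPy sketch of that threshold search (function names are my own, not TensorRT's API):

```python
import numpy as np

def kl_divergence(p, q):
    """KL(P||Q) over bins where both distributions are nonzero."""
    p = p / p.sum()
    q = q / q.sum()
    mask = (p > 0) & (q > 0)
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def quantize_distribution(ref, num_levels):
    """Collapse `ref` into `num_levels` buckets, then spread each
    bucket's mass back uniformly over its nonzero source bins."""
    out = np.zeros_like(ref, dtype=np.float64)
    bucket = np.arange(len(ref)) * num_levels // len(ref)
    for level in range(num_levels):
        sel = bucket == level
        nonzero = sel & (ref > 0)
        if nonzero.any():
            out[nonzero] = ref[sel].sum() / nonzero.sum()
    return out

def find_int8_threshold(activations, num_bins=2048, num_levels=128):
    """Pick the clipping threshold minimizing KL divergence between
    the fp32 activation histogram and its int8-quantized version."""
    hist, edges = np.histogram(np.abs(activations), bins=num_bins)
    hist = hist.astype(np.float64)
    best_t, best_kl = edges[-1], float("inf")
    for i in range(num_levels, num_bins + 1):
        ref = hist[:i].copy()
        ref[-1] += hist[i:].sum()  # clip outliers into the last bin
        cand = quantize_distribution(ref, num_levels)
        kl = kl_divergence(ref, cand)
        if kl < best_kl:
            best_kl, best_t = kl, edges[i]
    return best_t
```

If this picture is right, it might explain my problem: small objects contribute very little mass to the activation histograms, so the chosen thresholds and scales are dominated by large-object activations, and the fine detail needed for small objects gets clipped or coarsely quantized.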

Hi, did you follow the SSD sample? If so, what modifications did you make in the prototxt?

I did not use the SSD sample. I just trained a detection model and used TensorRT to do INT8 calibration.