I am using TensorRT 5.0 for INT8 calibration. The original model is an object-detection caffemodel.
I find that detection accuracy drops sharply when the ground-truth objects in the image are small.
I tried cropping the original image to make the objects bigger, but it helps very little, and it gets even worse when the ground truth contains both small and large objects: many objects are missed entirely when inferring with the INT8-calibrated model.
I would like to know how TensorRT performs INT8 calibration. Why does it fail on small objects?
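For context, my rough understanding (based on NVIDIA's public description of entropy calibration, not TensorRT's actual source) is that the calibrator picks a per-tensor clipping threshold by minimizing the KL divergence between the fp32 activation histogram and its 8-bit quantized version. A toy sketch of that idea, with all names and simplifications my own:

```python
import math

def kl_divergence(p, q):
    # KL(P || Q) over matching histogram bins; empty P bins contribute 0.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0 and qi > 0)

def pick_threshold(hist, num_quant_bins=128):
    """Toy entropy calibration: given a histogram of absolute fp32
    activation values, choose the clipping point (in bins) that
    minimizes KL(P || Q), where P is the fp32 distribution with
    outliers folded into the last kept bin, and Q is P re-expressed
    with only `num_quant_bins` quantization levels."""
    best_kl, best_t = float("inf"), len(hist)
    for t in range(num_quant_bins, len(hist) + 1):
        # P: keep the first t bins, clip everything beyond into bin t-1.
        p = list(hist[:t])
        p[-1] += sum(hist[t:])
        # Q: merge the t bins into num_quant_bins groups, then spread each
        # group's mass back uniformly over its non-empty source bins.
        q = [0.0] * t
        for i in range(num_quant_bins):
            start = i * t // num_quant_bins
            end = (i + 1) * t // num_quant_bins
            total = sum(p[start:end])
            nonzero = sum(1 for v in p[start:end] if v > 0)
            if nonzero:
                for j in range(start, end):
                    if p[j] > 0:
                        q[j] = total / nonzero
        # Normalize both to probability distributions and compare.
        sp, sq = sum(p), sum(q)
        if sp == 0 or sq == 0:
            continue
        kl = kl_divergence([v / sp for v in p], [v / sq for v in q])
        if kl < best_kl:
            best_kl, best_t = kl, t
    return best_t
```

If my understanding is right, the chosen threshold is driven by where the bulk of the activation mass sits (reportedly TensorRT uses 2048 histogram bins and 128 quantization levels), so I wonder whether that is related to small objects getting lost. Please correct me if this picture of the calibration is wrong.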