INT8 precision

I am trying to run a network that was trained in the Caffe framework.
I have followed your example and documentation and have created a calibrator.
I ran the network in both FP16 and INT8 modes.
The issue is precision: after the first convolution layer the outputs look roughly similar (although far from identical), but the deeper you go into the network, the more the values diverge, until they are totally different.

I set the int8_mode flag and passed a calibrator.
Is there something I am missing? Maybe some configuration of the calibrator?
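For reference, this is roughly how my build is set up (a minimal sketch using the legacy TensorRT Python API with the Caffe parser; the file names, batch size, and `MyCalibrator` class are placeholders for my actual calibrator implementation):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.CaffeParser()

# Parse the Caffe model (placeholder paths).
parser.parse(deploy="deploy.prototxt",
             model="model.caffemodel",
             network=network,
             dtype=trt.float32)

builder.max_batch_size = 8

# Enable INT8 mode and attach the calibrator
# (MyCalibrator implements trt.IInt8EntropyCalibrator2
# and feeds batches of representative input data).
builder.int8_mode = True
builder.int8_calibrator = MyCalibrator()

engine = builder.build_cuda_engine(network)
```

Is this the expected way to wire the calibrator in, or does the builder need additional settings for INT8?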