Is it mixed-precision or full INT8 when doing INT8 inference?

Hello,
I found that some layers have the same quantization parameters as other layers in the calib_table. So is it mixed-precision or full INT8 when doing INT8 inference?

The concat and pool layers have the same quantization scale as their bottom layers.
The SqueezeNet v1 quantization parameters are here:

TRT-5105-EntropyCalibration2
data: 3c952f5f
(Unnamed Layer* 0) [Convolution]_output: 3d97228b
conv1: 3d9b1309
pool1: 3d9b1309
(Unnamed Layer* 3) [Convolution]_output: 3e3604a9
fire2/squeeze1x1: 3e14a53f
(Unnamed Layer* 5) [Convolution]_output: 3ded1048
fire2/expand1x1: 3dc33b48
(Unnamed Layer* 7) [Convolution]_output: 3e47d10e
fire2/expand3x3: 3dc33b48
fire2/concat: 3dc33b48
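For reference, each hex value in the table is the IEEE-754 bit pattern of a float32 scale, so identical hex strings mean identical INT8 scales (e.g. conv1 and pool1 both map to the same scale). Below is a minimal standalone sketch I wrote for illustration; decodeCalibScale is my own helper, not part of the TensorRT API:

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <string>

// Hypothetical helper: reinterpret the hex string from the calibration
// cache as the bit pattern of a float32 scale.
float decodeCalibScale(const std::string& hex)
{
    uint32_t bits = static_cast<uint32_t>(std::stoul(hex, nullptr, 16));
    float scale;
    std::memcpy(&scale, &bits, sizeof(scale));
    return scale;
}

int main()
{
    // conv1 and pool1 share the same entry in the table above.
    std::printf("conv1/pool1 scale: %g\n", decodeCalibScale("3d9b1309")); // ~0.0757
    std::printf("data scale:        %g\n", decodeCalibScale("3c952f5f")); // ~0.0182
    return 0;
}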

While reading more of the TRT documentation, I found that the TRT API has a function to set layer precision (ILayer::setPrecision).
So the op implementation should not requantize at the end, and I'm pretty sure the last layer does not run in INT8.
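For reference, here is a minimal sketch of how that API could be used to request INT8 per layer and print what precision was requested, assuming a TRT 5.1-style C++ builder flow; the function name and loop are mine, not from the TRT samples, and the builder may still fall back to FP32/FP16 unless strict type constraints are enabled:

#include <cstdio>
#include "NvInfer.h"

// Hypothetical helper: request INT8 on every layer of the network and
// report the requested precision per layer.
void pinLayersToInt8(nvinfer1::INetworkDefinition* network)
{
    for (int i = 0; i < network->getNbLayers(); ++i)
    {
        nvinfer1::ILayer* layer = network->getLayer(i);

        // This only expresses a preference; the builder chooses the final
        // precision unless strict type constraints are set on the builder.
        layer->setPrecision(nvinfer1::DataType::kINT8);

        std::printf("layer %d (%s): precisionIsSet=%d precision=%d\n",
                    i, layer->getName(),
                    static_cast<int>(layer->precisionIsSet()),
                    static_cast<int>(layer->getPrecision()));
    }
}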