How can I measure the quantization error in TensorRT 2.1 for models quantized to INT8/FP16?

Hello,

Can anyone please let me know whether it is possible to calculate or visualize the quantization error in TensorRT 2.1 when using half-precision (FP16) or INT8 quantization?
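To clarify what I mean by "quantization error": run the same input through the FP32 engine and the INT8/FP16 engine, then compare the two output tensors. A minimal sketch of that comparison (assuming the two outputs are already available as arrays; the metric names here are just illustrative, not TensorRT APIs):

```python
import numpy as np

def quantization_error(fp32_out, quant_out):
    """Element-wise comparison of full-precision vs. quantized outputs."""
    fp32_out = np.asarray(fp32_out, dtype=np.float64)
    quant_out = np.asarray(quant_out, dtype=np.float64)
    diff = fp32_out - quant_out
    return {
        "mse": float(np.mean(diff ** 2)),          # mean squared error
        "max_abs": float(np.max(np.abs(diff))),    # worst-case deviation
        # relative L2 error, guarded against an all-zero reference
        "rel_l2": float(np.linalg.norm(diff) / (np.linalg.norm(fp32_out) + 1e-12)),
    }

# Hypothetical outputs for the same input from an FP32 and an INT8 engine
fp32 = [0.10, 0.50, 0.90]
int8 = [0.12, 0.48, 0.91]
print(quantization_error(fp32, int8))
```

Is something along these lines (or a built-in equivalent) supported or recommended for TensorRT 2.1?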

Thanks for your help!