Same inference speed with ResNet50 for INT8 and FP16

Hello, I have trained a ResNet50 classifier on my own dataset (num_classes=7). I tried running inference with the trained model in INT8 mode, but the inference speed is almost the same as in FP16 mode. I don't know why. Can anyone tell me what could cause this? Thanks.

Hi,

Could you share more details about your performance test?
Did you use TensorFlow or our TensorRT trtexec binary?
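For reference, a quick way to compare the two precision modes is to benchmark both with trtexec (a sketch assuming your network has been exported to an ONNX file; `model.onnx` here is a placeholder for your actual file):

```shell
# Build and time an FP16 engine from the ONNX model
trtexec --onnx=model.onnx --fp16

# Build and time an INT8 engine from the same model
# (without a calibration cache, trtexec assigns placeholder dynamic
# ranges, which is fine for speed measurement but not for accuracy)
trtexec --onnx=model.onnx --int8
```

Comparing the reported latency/throughput of the two runs shows whether INT8 actually gives a speedup on your GPU.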

Thanks.

I used PyTorch -> ONNX -> TensorRT. So how can I see the inference precision of each layer? I used Python; where can I find the log?
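One way to see what happens per layer is to rerun the build with verbose logging and a per-layer profile (a sketch using trtexec flags; `model.onnx` is a placeholder for the exported model):

```shell
# Verbose build log: prints the tactics TensorRT selects per layer,
# including layers that fall back to a higher precision
trtexec --onnx=model.onnx --int8 --verbose

# Per-layer timing report after the inference run, useful to spot
# which layers dominate and whether they actually run in INT8
trtexec --onnx=model.onnx --int8 --dumpProfile
```

If some layers are not quantized (or if the GPU has no INT8 tensor-core advantage over FP16), the overall speed of the INT8 engine can end up close to the FP16 one.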