Quantization-aware training is slower than FP16 and post-training quantization

Hi,

It looks like you're using a Jetson platform. INT8 may not be supported on your Jetson hardware; please check the precision support matrix here.
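
If you'd like to verify this programmatically, a quick check along these lines should work with the standard TensorRT Python bindings (a minimal sketch; it only queries the GPU you run it on):

```python
import tensorrt as trt

# Create a builder to query the capabilities of the current GPU.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# These attributes report whether the hardware has fast native support
# for the given precision; without fast INT8, an INT8 engine can fall
# back to slower kernels and end up losing to FP16.
print("Fast FP16 support:", builder.platform_has_fast_fp16)
print("Fast INT8 support:", builder.platform_has_fast_int8)
```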

Please allow us some time to test it on a V100. Meanwhile, we recommend trying mixed precision and FP16.
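
For reference, enabling FP16 is a one-flag change in the builder configuration (a minimal sketch; creating the network and building the engine are omitted and depend on your own setup):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# Allow TensorRT to use FP16 kernels where they are faster; layers
# without FP16 support fall back to FP32, which gives mixed precision.
config.set_flag(trt.BuilderFlag.FP16)
```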

Thank you.