Mixed-Precision ResNet-50 Using Tensor Cores with TensorFlow

Originally published at: https://developer.nvidia.com/blog/mixed-precision-resnet-50-tensor-cores/

Mixed-precision combines different numerical precisions in a computational method. Using precision lower than FP32 reduces memory usage, allowing the deployment of larger neural networks. Data transfers take less time, and compute performance increases, especially on NVIDIA GPUs with Tensor Core support for that precision. Mixed-precision training of DNNs achieves two main objectives: decreases the required amount of…
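
The full article walks through the recipe step by step. As a rough sketch of the core idea (FP32 master weights, FP16 compute, an FP32 loss, and loss scaling), something like the following TF 1.x-style code captures it. This is only an illustration, not the post's exact code; `model_fn` is a hypothetical stand-in for the ResNet-50 graph builder, and the loss-scale value is arbitrary.

```python
import tensorflow as tf  # assuming the TF 1.x graph API used at the time of the post


def float32_variable_storage_getter(getter, name, shape=None, dtype=None,
                                    initializer=None, regularizer=None,
                                    trainable=True, *args, **kwargs):
    """Create variables in FP32, then cast to the requested compute dtype (FP16).

    Keeping an FP32 "master" copy of the weights while running the forward and
    backward passes in FP16 is the core of the mixed-precision recipe.
    """
    variable = getter(name, shape, dtype=tf.float32,
                      initializer=initializer, regularizer=regularizer,
                      trainable=trainable, *args, **kwargs)
    if dtype != tf.float32:
        variable = tf.cast(variable, dtype)
    return variable


def mixed_precision_loss_and_grads(model_fn, images, labels, loss_scale=128.0):
    # model_fn is a hypothetical callable that builds the network (e.g. ResNet-50)
    # from FP16 inputs; it is a placeholder, not an API from the post.
    with tf.variable_scope('model', custom_getter=float32_variable_storage_getter):
        logits = model_fn(tf.cast(images, tf.float16))   # FP16 compute on Tensor Cores
    logits = tf.cast(logits, tf.float32)                 # upcast before the softmax loss
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    # Scale the loss so small FP16 gradients don't flush to zero, then unscale.
    grads = tf.gradients(loss * loss_scale, tf.trainable_variables())
    grads = [g / loss_scale if g is not None else None for g in grads]
    return loss, grads
```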

Thanks for this tutorial, great video.

Hi Shiva, I was looking at sparse_softmax_cross_entropy_with_logits, and it seems like it actually upcasts fp16 tensors to fp32 automatically. I filed an issue for this - it feels like very sneaky behavior.
https://github.com/tensorfl...
Thus, the logits upcast you suggested might no longer be needed.
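
If that's the case, the two variants below should behave the same. This is just a quick TF 1.x-style sketch to illustrate the point; the shapes are made up.

```python
import tensorflow as tf  # TF 1.x-style API, matching the op named above

logits_fp16 = tf.random_normal([8, 1000], dtype=tf.float16)  # e.g. FP16 ResNet-50 logits
labels = tf.zeros([8], dtype=tf.int32)

# What the tutorial suggests: explicitly upcast the logits so the softmax runs in FP32.
loss_manual = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=tf.cast(logits_fp16, tf.float32))

# What the issue reports: the op may already upcast FP16 logits internally,
# so passing them through directly would give the same FP32-accuracy result.
loss_auto = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits_fp16)
```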