Multiplication in Half and Accumulation in Single

I remember it was possible to set the compute type of Tensor Cores to do the multiplication in FP16 and the accumulation in FP32, but now that I am checking the documentation I am not seeing this kind of operation. Am I right, or should I check something else?
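
For reference, this is roughly the call I have in mind. My assumption is that passing FP16 inputs with `CUBLAS_COMPUTE_32F` as the compute type is what requests half-precision multiplies with single-precision accumulation, but please correct me if that is wrong:

```c
#include <cublas_v2.h>
#include <cuda_fp16.h>

// Sketch: FP16 inputs/outputs, but CUBLAS_COMPUTE_32F so the
// accumulation is done in FP32 (assumption; column-major layout).
cublasStatus_t gemm_fp16_in_fp32_acc(cublasHandle_t handle,
                                     int m, int n, int k,
                                     const __half *A, const __half *B,
                                     __half *C)
{
    // With a 32F compute type, alpha and beta are passed as float.
    const float alpha = 1.0f;
    const float beta  = 0.0f;

    return cublasGemmEx(handle,
                        CUBLAS_OP_N, CUBLAS_OP_N,
                        m, n, k,
                        &alpha,
                        A, CUDA_R_16F, m,   // FP16 inputs
                        B, CUDA_R_16F, k,
                        &beta,
                        C, CUDA_R_16F, m,   // FP16 output
                        CUBLAS_COMPUTE_32F, // accumulate in FP32
                        CUBLAS_GEMM_DEFAULT);
}
```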

Also, for the two types mentioned below, is CUBLAS_COMPUTE_16F more accurate?

CUBLAS_COMPUTE_16F: This is the default and highest-performance mode for 16-bit half precision floating point and all compute and intermediate storage precisions with at least 16-bit half precision. Tensor Cores will be used whenever possible.

CUBLAS_COMPUTE_16F_PEDANTIC: This mode uses 16-bit half precision floating point standardized arithmetic for all phases of calculations and is primarily intended for numerical robustness studies, testing, and debugging. This mode might not be as performant as the other modes since it disables use of tensor cores.
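
For context, this is how I understand the two 16F compute types would be selected in the same `cublasGemmEx` call. I am assuming that with a 16F compute type, alpha and beta are then passed as `__half` rather than `float`:

```c
#include <cublas_v2.h>
#include <cuda_fp16.h>

// Same GEMM as above, but with an all-FP16 compute type.
// (Assumption: with a 16F compute type, alpha/beta are __half.)
cublasStatus_t gemm_all_fp16(cublasHandle_t handle,
                             int m, int n, int k,
                             const __half *A, const __half *B, __half *C)
{
    const __half alpha = __float2half(1.0f);
    const __half beta  = __float2half(0.0f);

    return cublasGemmEx(handle,
                        CUBLAS_OP_N, CUBLAS_OP_N,
                        m, n, k,
                        &alpha,
                        A, CUDA_R_16F, m,
                        B, CUDA_R_16F, k,
                        &beta,
                        C, CUDA_R_16F, m,
                        CUBLAS_COMPUTE_16F,  // or CUBLAS_COMPUTE_16F_PEDANTIC
                        CUBLAS_GEMM_DEFAULT);
}
```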