A more accurate and faster implementation of powf()

To first order, pow(x,y) = exp(y*log(x)), but pow() is actually one of the harder-to-implement functions in a standard math library. First, there are a ton of special cases (enumerated in the ISO C99 standard, for example) to consider. Second, in order to compute pow() accurately, the logarithm needs to be computed with more than native precision to compensate for the error magnification properties of the exponential function. For a single-precision implementation, one would ideally want the maximum relative error in the logarithm to be <= 2**(-31) ~ 4.6e-10.

This would be trivial to achieve by the use of double-precision arithmetic for the highest-order terms of the core approximation. Alas, on most NVIDIA GPUs this approach would not be performance-competitive due to the huge disparity in throughput between single-precision and double-precision operations. A workable alternative is the use of double-float techniques for these terms, making best possible use of FMA and eliminating the normalization step typically used at the end of every double-float operation.

The code below implements two alternatives in the extended-precision computation of the logarithm, depending on whether preference is given to accuracy or speed. Two-argument functions are difficult to test exhaustively; the data below is from extensive tests using about 2**42 test vectors.

CUDA 8.0 built-in powf()    max. ulp err. = 8.87789 @ (1.39190876, -259.933838)
my_powf(), PREFER_SPEED     max. ulp err. = 5.71140 @ (0.71992141,  259.478912)
my_powf(), PREFER_ACCURACY  max. ulp err. = 2.09680 @ (0.70102614, -243.910400)

Performance gains are modest. There are probably ways to squeeze a few more percent of performance out of the code. Currently, using the CUDA 8.0 tool chain, I measure the following throughput on a Quadro K2200 (sm_50):

CUDA 8.0 built-in powf()   5.716e9 function calls / second
my_powf(), PREFER_SPEED    6.338e9 function calls / second (+11%)
my_powf(), PREFER_ACCURACY 6.010e9 function calls / second (+5%)

The code below has been tested using nvcc’s default compilation mode. Adjustments may need to be made when compiling in other modes, or when porting to non-CUDA platforms. powf() is a fairly complicated function and my test framework isn’t fully industrial-strength, so some corner cases may have slipped through testing. I will try to address functional deficiencies brought to my attention.

/*
  Copyright (c) 2016-2017, Norbert Juffa
  All rights reserved.

  Redistribution and use in source and binary forms, with or without 
  modification, are permitted provided that the following conditions
  are met:

  1. Redistributions of source code must retain the above copyright 
     notice, this list of conditions and the following disclaimer.

  2. Redistributions in binary form must reproduce the above copyright
     notice, this list of conditions and the following disclaimer in the
     documentation and/or other materials provided with the distribution.

  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 
  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 
  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
  HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 
  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 
  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/

#define PREFER_SPEED     0
#define PREFER_ACCURACY  1

#define LOG_VARIANT (PREFER_SPEED)

/* Compute natural logarithm of the argument with extended precision, returning
   the result as a normalized double-float loghi:loglo. If LOG_VARIANT is set 
   to PREFER_SPEED, maximum relative error of the result is 3.64709837e-9. If
   LOG_VARIANT is set to PREFER_ACCURACY, maximum relative error of the result
   is 8.56450852e-10.
*/
__device__ void my_logf_ext (float a, float *loghi, float *loglo)
{
    const float LOG2_HI =  6.93147182e-1f; //  0x1.62e430p-1f
    const float LOG2_LO = -1.90465421e-9f; // -0x1.05c610p-29f;

    float m, r, i, s, t, p, qhi, qlo;
    int e;

    /* Reduce argument to m in [181/256, 362/256] */
    i = 0.0f;
    if (a < 1.175494351e-38f){ // 0x1.0p-126
        a = a * 8388608.0f; // 0x1.0p+23
        i = -23.0f;
    }
    e = (__float_as_int (a) - __float_as_int (0.70703125f)) & 0xff800000;
    m = __int_as_float (__float_as_int (a) - e);
    i = fmaf ((float)e, 1.19209290e-7f, i); // 0x1.0p-23

    /* Compute q = (m-1)/(m+1) as a double-float qhi:qlo */
    p = m + 1.0f;
    m = m - 1.0f;
    asm ("rcp.approx.ftz.f32 %0, %1;\n\t" : "=f"(r) : "f"(p)); // r = 1.0f / p
    qhi = m * r;
    t = fmaf (qhi, -2.0f, m);
    s = fmaf (qhi, -m, t);
    qlo = r * s;

    /* Approximate atanh(q), q in [-75/437, 53/309] */ 
    s = qhi * qhi;
    r =             1.29455566e-1f;  // 0x1.092000p-3
    r = fmaf (r, s, 1.41983002e-1f); // 0x1.22c7fcp-3
    r = fmaf (r, s, 2.00014994e-1f); // 0x1.99a176p-3
    r = fmaf (r, s, 3.33333254e-1f); // 0x1.555550p-2
#if LOG_VARIANT == PREFER_SPEED
    r = r * s;
    s = fmaf (r, qhi, fmaf (r, qlo, qlo)); // r*(qhi:qlo) + qlo
#elif LOG_VARIANT == PREFER_ACCURACY
    t = fmaf (qhi, qlo + qlo, fmaf (qhi, qhi, -s)); // s:t = (qhi:qlo)**2
    p = s * qhi;
    t = fmaf (s, qlo, fmaf (t, qhi, fmaf (s, qhi, -p))); // p:t = (qhi:qlo)**3
    s = fmaf (r, p, fmaf (r, t, qlo));
#else
#error unsupported LOG_VARIANT
#endif    

    /* log(a) = 2 * atanh(q) + i * log(2) */
    t = fmaf ( LOG2_HI * 0.5f, i, qhi);
    p = fmaf (-LOG2_HI * 0.5f, i, t);
    s = (qhi - p) + s;                        
    s = fmaf ( LOG2_LO * 0.5f, i, s);
    r = t + t;
    *loghi = t = fmaf (2.0f, s, r);
    *loglo = fmaf (2.0f, s, r - t);
}

/* Compute exp(a). No check for overflow or underflow is performed.
   Maximum error = 0.86565 ulp
*/
__device__ float my_expf_unchecked (float a)
{
    float f, r, s, t, j;
    int i, ia;

    // exp(a) = 2**i * exp(f); i = rintf (a / log(2))
    j = fmaf (1.442695f, a, 12582912.f) - 12582912.f; // 0x1.715476p0, 0x1.8p23
    f = fmaf (j, -6.93145752e-1f, a); // -0x1.62e400p-1  // log_2_hi 
    f = fmaf (j, -1.42860677e-6f, f); // -0x1.7f7d1cp-20 // log_2_lo 
    i = (int)j;
    // approximate r = exp(f) on interval [-log(2)/2, +log(2)/2]
    r =             1.38187408e-3f;  // 0x1.6a4000p-10
    r = fmaf (r, f, 8.37522652e-3f); // 0x1.12707ep-7
    r = fmaf (r, f, 4.16693948e-2f); // 0x1.555b0ep-5
    r = fmaf (r, f, 1.66664496e-1f); // 0x1.555432p-3
    r = fmaf (r, f, 4.99999821e-1f); // 0x1.fffff4p-2
    r = fmaf (r, f, 1.00000000e+0f); // 0x1.000000p+0
    r = fmaf (r, f, 1.00000000e+0f); // 0x1.000000p+0
    // exp(a) = 2**i * r;
    ia = (i > 0) ?  0 : 0x83000000;
    s = __int_as_float (0x7f000000 + ia);
    t = __int_as_float ((i << 23) - ia);
    r = r * s;
    r = r * t;
    return r;
}

/* a**b = exp (b * log (a)), where a > 0, and log(a) is computed with extended 
   precision as a double-float. 
*/
__device__ float my_powf_core (float a, float b)
{
    const float MAX_IEEE754_FLT = __int_as_float (0x7f7fffff);
    const float EXP_OVFL_UNFL_F = 104.0f;
    const float MY_INF_F = __int_as_float (0x7f800000);
    float lhi, llo, thi, tlo, phi, plo, r;

    /* compute lhi:llo = log(a) */
    my_logf_ext (a, &lhi, &llo);
    /* compute phi:plo = b * log(a) */
    thi = lhi * b;
    tlo = fmaf (lhi, b, -thi);
    tlo = fmaf (llo, b, +tlo);
    /* normalize intermediate result thi:tlo, giving final result phi:plo */
    phi = __fadd_rz (thi, tlo); // avoid premature ovfl in exp() computation
    plo = (thi - phi) + tlo;
    /* exp'(x) = exp(x); exp(x+y) = exp(x) + exp(x) * y, for |y| << |x| */
    r = my_expf_unchecked (phi);
    /* prevent generation of NaN during interpolation due to r = INF */
    if (fabsf (r) <= MAX_IEEE754_FLT) {
        r = fmaf (plo, r, r);
    }
    /* severe overflow / underflow in exp() */
    if (fabsf (thi) > EXP_OVFL_UNFL_F) {
        r = (thi < 0.0f) ? 0.0f : MY_INF_F;
    } 
    return r;
}

/* Compute a**b. If LOG_VARIANT is PREFER_SPEED, the maximum error observed in
   very extensive testing is 5.71140 ulp. If LOG_VARIANT is PREFER_ACCURACY,
   the maximum error observed in very extensive testing is 2.09680 ulp. Because
   testing is not exhaustive, these error bounds cannot be guaranteed.
*/
*/
__device__ float my_powf (float a, float b)
{
    const float MY_NAN_F = __int_as_float (0x7fffffff);
    const float MY_INF_F = __int_as_float (0x7f800000);
    int expo_odd_int;
    float r;

    r = my_powf_core (fabsf (a), b);
    /* special case handling per ISO C specification */
    expo_odd_int = fmaf (-2.0f, floorf (0.5f * b), b) == 1.0f;
    if ((a < 0.0f) && expo_odd_int) {
        r = -r;
    }
    if ((a == 1.0f) || (b == 0.0f)) {
        r = 1.0f;
    } else if (isnan (a) || isnan (b)) {
        r = a + b;  // convert SNaN to QNaN or trigger exception
    } else if (isinf (b)) {
        r = ((fabsf (a) < 1.0f) != (b < 0.0f)) ? 0.0f :  MY_INF_F;
        if (a == -1.0f) r = 1.0f;
    } else if (isinf (a)) {
        r = (b < 0.0f) ? 0.0f : MY_INF_F;
        if ((a < 0.0f) && expo_odd_int) r = -r;
    } else if (a == 0.0f) {
        r = (expo_odd_int) ? (a + a) : 0.0f;
        if (b < 0.0f) r = copysignf (MY_INF_F, r);
    } else if ((a < 0.0f) && (b != floorf (b))) {
        r = MY_NAN_F;
    }
    return r;
}

Thanks for posting!

How does the ‘PREFER_SPEED’ implementation compare to the intrinsic ‘__powf()’ in terms of performance?

__powf (x, y) is literally __expf (y * __logf (x)), where most of the heavy lifting is done by SFU instructions. So the expected performance difference is roughly an order of magnitude in favor of __powf().

My simple test framework (on the Quadro K2200) pegs __powf() at around 50e9 function calls per second in default compilation mode, and around 75e9 function calls per second with -ftz=true. Due to the unsophisticated nature of my performance measurement framework, this likely underestimates the performance of __powf() by a non-trivial amount, as the impact of the loads and stores in the code becomes exposed.

Generally speaking, “regular” math function implementations cannot compete with CUDA’s device intrinsics for performance; the race is not even close.