Implementation of atan2f() with improved accuracy (no negative impact on performance)

The fourth revision of a report comparing the accuracy of math functions in commonly used math libraries became available recently:

Vincenzo Innocente and Paul Zimmermann, “Accuracy of Mathematical Functions in Single, Double, Extended Double and Quadruple Precision”, February 2023 ⟨hal-03141101v4⟩

This covers CUDA 11.8 and shows that there is still room for improvement in CUDA’s standard math library. For atan2f(), the worst-case error the authors of the report found was 2.18 ulp, which I confirmed. Since the computation of atan2f() contains a division, it is sensitive to the compiler’s -prec-div setting. With -prec-div=false I found a maximum error of 2.93 ulp.

The alternative implementation of atan2f() below lowers the maximum error to 1.62 ulp with the compiler’s default setting -prec-div=true. With -prec-div=false the maximum error is unchanged at 2.93 ulp, but the percentage of correctly rounded results increases.
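
For readers who want to run similar checks, here is a minimal sketch of one way to estimate the error of atan2f() on the GPU against a double-precision reference computed on the host. This is not the test framework behind the numbers above: the helper names (eval_atan2f, ulp_at), the random operand sampling, and the simplified ulp computation are illustrative assumptions only. Build with nvcc, optionally adding -prec-div=false to observe the effect of that flag.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <cuda_runtime.h>

__global__ void eval_atan2f (const float *y, const float *x, float *res, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) res[i] = atan2f (y[i], x[i]); // substitute my_atan2f() to test the alternative
}

// spacing of single-precision numbers at the magnitude of the reference (simplified ulp)
static double ulp_at (double ref)
{
    float f = fabsf ((float)ref);
    return (double)nextafterf (f, INFINITY) - (double)f;
}

int main (void)
{
    const int n = 1 << 22;
    float *x, *y, *r;
    cudaMallocManaged ((void **)&x, n * sizeof (float));
    cudaMallocManaged ((void **)&y, n * sizeof (float));
    cudaMallocManaged ((void **)&r, n * sizeof (float));
    srand (12345);
    for (int i = 0; i < n; i++) { // crude random sampling of sign and magnitude
        x[i] = ldexpf (2.0f * rand () / RAND_MAX - 1.0f, rand () % 60 - 30);
        y[i] = ldexpf (2.0f * rand () / RAND_MAX - 1.0f, rand () % 60 - 30);
    }
    eval_atan2f<<<(n + 255) / 256, 256>>> (y, x, r, n);
    cudaDeviceSynchronize ();
    double max_err = 0.0;
    for (int i = 0; i < n; i++) {
        double ref = atan2 ((double)y[i], (double)x[i]);
        double err = fabs ((double)r[i] - ref) / ulp_at (ref);
        if (err > max_err) max_err = err;
    }
    printf ("maximum observed error = %.2f ulp\n", max_err);
    cudaFree (x); cudaFree (y); cudaFree (r);
    return 0;
}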

The performance of the alternative version my_atan2f() was no worse than that of the built-in function on the sm_75 platform I tested on. Based on the respective code characteristics, I do not expect a negative performance impact on any GPU architecture currently supported by CUDA. A sketch of one way to set up such a timing comparison follows the code listing.

[Code below updated 3/14/2023]

/*
  Copyright (c) 2023, Norbert Juffa

  Redistribution and use in source and binary forms, with or without 
  modification, are permitted provided that the following conditions
  are met:

  1. Redistributions of source code must retain the above copyright 
     notice, this list of conditions and the following disclaimer.

  2. Redistributions in binary form must reproduce the above copyright
     notice, this list of conditions and the following disclaimer in the
     documentation and/or other materials provided with the distribution.

  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 
  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 
  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
  HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 
  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 
  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/

// approximate single-precision reciprocal via the PTX rcp.approx instruction (flushes subnormals to zero)
__forceinline__ __device__ float raw_rcp (float a)
{
    float r;
    asm ("rcp.approx.ftz.f32 %0,%1;" : "=f"(r) : "f"(a));
    return r;
}

__device__ float my_atan2f (float y, float x)
{
    float mx, mn, xa, ya, xy, a, p, q, r, s, t;
    
    if ((y == 0.0f) && (x == 0.0f)) {
        r = signbit (x) ? 0x1.921fb6p+1f : 0.0f; // pi, 0
    } else if (isinf (x) && isinf (y)) {
        r = signbit (x) ? 0x1.2d97c8p+1f : 0x1.921fb6p-1f; // 3*pi/4, pi/4
    } else {
        xy = x + y; // NaN if either argument is NaN (the both-infinite case was handled above)
        xa = fabsf (x);
        ya = fabsf (y);
        mn = fminf (xa, ya);
        mx = fmaxf (xa, ya);
        a = mn / mx; // ratio in [0, 1]; this division is affected by -prec-div
        s = a * a;
        // rational approximation of atan(a) on [0, 1]: a + a*s*P(s)/Q(s), with s = a*a
        q =          s + 1.13353987e+1f;  //  0x1.6abb96p+3
        q = fmaf (q, s,  2.88424511e+1f); //  0x1.cd7aaep+4
        q = fmaf (q, s,  1.96966705e+1f); //  0x1.3b2590p+4
        q = raw_rcp (q);
        p =             -8.23362887e-1f;  // -0x1.a58fd2p-1
        p = fmaf (p, s, -5.67486715e+0f); // -0x1.6b3106p+2
        p = fmaf (p, s, -6.56555414e+0f); // -0x1.a4320ap+2
        t = s * a;
        p = p * t;
        r = fmaf (p, q, a);
        if (ya > xa) {
            r = fmaf (0x1.ddcb02p-1f, 0x1.aee9d6p+0f, -r); // pi/2 - r
        }
        if (x < 0.f) {
            r = 0x1.921fb6p+1f - r; // pi - r
        }
        if (isnan (xy)) r = xy;
    }
    r = r * __int_as_float ((__float_as_int (y) & 0x80000000) | 0x3f800000); // apply sign of y by multiplying with +1.0f or -1.0f
    return r;
}
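
As mentioned above, here is a minimal sketch of how one could set up a throughput comparison between the built-in atan2f() and my_atan2f(); append it below the listing above so my_atan2f() is visible. The kernel shape (bench_atan2), the time_ms helper, the iteration count, and the launch configuration are illustrative choices, not the exact benchmark I ran; the accumulation into out keeps the compiler from eliminating the computation.

#include <stdio.h>
#include <cuda_runtime.h>

template <int USE_MY>
__global__ void bench_atan2 (float *out, float seed, int iters)
{
    float x = seed + threadIdx.x * 1.0e-6f;
    float y = 0.5f * seed + blockIdx.x * 1.0e-6f;
    float acc = 0.0f;
    for (int i = 0; i < iters; i++) {
        acc += USE_MY ? my_atan2f (y, x) : atan2f (y, x);
        x += 1.0e-7f; // perturb one operand so iterations are not identical
    }
    out[blockIdx.x * blockDim.x + threadIdx.x] = acc;
}

static float time_ms (cudaEvent_t start, cudaEvent_t stop)
{
    float ms = 0.0f;
    cudaEventSynchronize (stop);
    cudaEventElapsedTime (&ms, start, stop);
    return ms;
}

int main (void)
{
    const int blocks = 1024, threads = 256, iters = 4096;
    float *out;
    cudaMalloc ((void **)&out, blocks * threads * sizeof (float));
    cudaEvent_t start, stop;
    cudaEventCreate (&start);
    cudaEventCreate (&stop);

    bench_atan2<0><<<blocks, threads>>> (out, 0.75f, iters); // warm-up
    bench_atan2<1><<<blocks, threads>>> (out, 0.75f, iters); // warm-up
    cudaDeviceSynchronize ();

    cudaEventRecord (start);
    bench_atan2<0><<<blocks, threads>>> (out, 0.75f, iters); // built-in atan2f()
    cudaEventRecord (stop);
    float ms_builtin = time_ms (start, stop);

    cudaEventRecord (start);
    bench_atan2<1><<<blocks, threads>>> (out, 0.75f, iters); // my_atan2f()
    cudaEventRecord (stop);
    float ms_mine = time_ms (start, stop);

    printf ("atan2f: %.3f ms  my_atan2f: %.3f ms\n", ms_builtin, ms_mine);
    cudaFree (out);
    cudaEventDestroy (start);
    cudaEventDestroy (stop);
    return 0;
}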