# Accuracy: CUDA 4.0 math vs C math

CUDA math functions have lower accuracy than C math ones.

I wrote an application that executes many math functions on double-precision data, e.g. log, sqrt, and exp.
In the CUDA version I did not use the intrinsic functions, and I ran my tests on an NVIDIA Tesla 2050, i.e. compute capability 2.0.
Comparing the results of the CPU and GPU versions, I found that the maximum absolute error is greater than or equal to 1.0e-6.
Is it possible to obtain better accuracy? If so, please let me know how.

Thank you.

The maximum error in ULPs (units in the last place) is documented in the CUDA Programming Guide. If you need better accuracy, you would have to roll your own implementations of these functions.

Christian

An error of this magnitude hints at single-precision rather than double-precision accuracy. Please note that the math functions are overloaded, and you will get results accurate to double precision only if you pass double-precision data. E.g.

```c
double res1;
double res2;

float  xf = 0.5f;
double xd = xf;

res1 = sqrt(xf);  // square root accurate to single precision
res2 = sqrt(xd);  // square root accurate to double precision
```

The double-precision sqrt() is correctly rounded according to IEEE-754 round-to-nearest-or-even. The maximum ULP error in double-precision log() and exp() is very small, and for most inputs their results will match the correctly rounded result.

As cbuchner1 already stated, the error bounds for all math functions are given in an appendix of the Programming Guide. These have been determined through extensive testing. If you have specific cases for which you believe these error bounds are exceeded, I’d be happy to look into them. For ease of reproduction, all arguments should be stated as hexadecimal numbers, or, if printed in decimal, printed with 17 decimal digits.