PI definition error: cl_platform.h seems to have errors in definitions of math constants

I downloaded NVIDIA’s GPU Computing Toolkit v4.0, and in the file \include\CL\cl_platform.h, the following lines appear:

#define  CL_M_PI            3.141592653589793115998
...
#define  CL_M_PI_F          3.14159274101257f

First of all, these two values differ from each other after 7 significant digits. Second, both are incorrect according to the American Mathematical Society (AMS :: Feature Column from the AMS), piday.org (http://www.piday.org/million.php), and many other sources, which agree that the first 22 digits of pi are 3.141592653589793238462. Why is NVIDIA’s definition wrong? Is this going to be changed? Are other mathematical constants also incorrect?

My guess is that these values map exactly to the single- and double-precision numbers closest to the actual value of pi. I don’t know what compilers do when you write a floating-point constant with more significant figures than can be represented. If a compiler rounds toward zero, then you would need a trick like this to get the nearest value instead. Hopefully someone who understands compilers and the IEEE standard can chime in here.
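Here is a minimal sketch of how you could check that guess (assuming a C99 host compiler that rounds decimal literals to the nearest representable value, which is the usual behavior):

/* Store a long decimal literal for pi in a double and a float, then print
   the shortest digit strings that round-trip each type (17 significant
   digits for double, 9 for float). */
#include <stdio.h>

int main(void){
	double pi_d = 3.14159265358979323846264338327950288;	/* nearest double to pi */
	float  pi_f = 3.14159265358979323846264338327950288f;	/* nearest float to pi  */

	printf("double: %.17g\n", pi_d);		/* prints 3.1415926535897931 */
	printf("float : %.9g\n", (double)pi_f);	/* prints 3.14159274 */

	/* On a C library that prints the full decimal expansion (glibc does),
	   "%.21f" on pi_d shows 3.141592653589793115998, i.e. exactly the
	   CL_M_PI literal from the header. */
	return 0;
}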

I checked one case in the Python interpreter, and found that the true value of pi and the “incorrect” value given in the header map to the same double-precision value. CL_M_PI will have no effect on the accuracy of your calculation in double-precision floating point.
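For reference, here is roughly the same check redone in C instead of Python (a sketch; strtod is assumed to round to nearest, as it does on common platforms):

/* Parse both decimal strings at run time and compare the resulting doubles. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void){
	double header_pi = strtod("3.141592653589793115998", NULL);	/* digits from cl_platform.h */
	double true_pi   = strtod("3.141592653589793238462", NULL);	/* "correct" digits of pi    */

	printf("same value?       %s\n", header_pi == true_pi ? "yes" : "no");
	printf("same bit pattern? %s\n",
	       memcmp(&header_pi, &true_pi, sizeof header_pi) == 0 ? "yes" : "no");
	return 0;
}

Both comparisons should come out equal, which is the same conclusion the Python check gives.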

So, I don’t have an authoritative answer for you, but it sounds like this is a deliberate choice to deal with the properties of floating point representations.

A simple test I compiled in Visual Studio 2008 shows that the differing digits are beyond the accuracy of both double and float precision. After your post, I thought maybe the values were chosen to compensate for floating-point rounding when converting from decimal to binary during compilation, but that does not seem to be the case either. It still looks like there are wrong digits of pi in the header file, even though they don’t change any values in programs.

#include <cstdio>

int main(int argc, char** argv){
	double nvidiaPI_d = 3.141592653589793115998;	/* digits from cl_platform.h */
	double realPI_d   = 3.141592653589793238463;	/* correct digits of pi      */
	float  nvidiaPI_f = 3.14159274101257f;		/* digits from cl_platform.h */
	float  realPI_f   = 3.14159265358979f;		/* correct digits of pi      */

	printf("nvidiaPI_d: %.20f\n", nvidiaPI_d);
	printf("realPI_d  : %.20f\n", realPI_d);
	printf("nvidiaPI_f: %.20f\n", nvidiaPI_f);
	printf("realPI_f  : %.20f\n", realPI_f);

	return 0;
}

program output:

nvidiaPI_d: 3.14159265358979310000
realPI_d  : 3.14159265358979310000
nvidiaPI_f: 3.14159274101257320000
realPI_f  : 3.14159274101257320000

Notice that each nvidia/real pair prints identically, and that the printed digits match the constants in the NVIDIA header to the precision that double and float can actually hold.

This Stack Overflow question quotes the relevant part of the C99 standard:

http://stackoverflow.com/questions/649090/how-are-floating-point-literals-in-c-interpreted

Unfortunately, in C the rounding convention for floating-point literals seems to be implementation-defined. If you follow the link for Java, the language spec says that float literals are rounded to the nearest binary floating-point value, which seems like the sensible way to do things. I wasn’t able to find any mention of float-literal rounding in the OpenCL spec using simple keyword searches, but it might be in there somewhere. If OpenCL compilers round to nearest, then writing CL_M_PI with the true decimal digits would be equivalent and less confusing. A comment in the header would be nice…
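For completeness, a quick sketch along those lines (again assuming the host compiler rounds decimal literals to nearest): the header’s spelling and the “true” digits of pi produce identical constants in both precisions.

/* Compare the header's spellings of pi with the mathematically correct
   digits; on a round-to-nearest compiler both pairs are bit-identical. */
#include <stdio.h>

#define HEADER_M_PI    3.141592653589793115998		/* digits from cl_platform.h */
#define TRUE_M_PI      3.141592653589793238462		/* first 22 digits of pi     */
#define HEADER_M_PI_F  3.14159274101257f
#define TRUE_M_PI_F    3.14159265358979323846f

int main(void){
	printf("doubles identical: %s\n", HEADER_M_PI   == TRUE_M_PI   ? "yes" : "no");
	printf("floats  identical: %s\n", HEADER_M_PI_F == TRUE_M_PI_F ? "yes" : "no");
	return 0;
}

If both lines print “yes”, then the choice of digits in cl_platform.h is purely cosmetic on that compiler.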