Strange precision problem

Hi,

I am implementing a Lua wrapper for a kind of vector object that resides purely in device memory, i.e. you can apply a sequence of kernels to the vector without having to copy the data back and forth between device and host.

The memory is initialized like this:

// tensor->storage is a float*

cutilSafeCall(cudaMalloc((void**)&(tensor->storage), totalSize));

Then, I can assign a single value to an element of the vector:

float value = 1.2;

long index = 0;

cutilSafeCall(cudaMemcpy(tensor->storage + index, &value, sizeof(float), cudaMemcpyHostToDevice));

and I retrieve the value to print it like this:

float value = 0;

long index = 0;

cutilSafeCall(cudaMemcpy(&value, tensor->storage + index, sizeof(float), cudaMemcpyDeviceToHost));

printf("value = %f\n", value);

But if I run that, I get the following when printing the retrieved value:

value = 1.2000000476837

However, if I assign an integer value, like 1, the retrieved value is correct. Is this some kind of weird rounding problem, or am I doing something wrong? I am using CUDA 3.0 beta 1 on Windows 7 x64 with Visual Studio 2008 and a GTX 275 GPU.
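In case it helps, here is everything combined into one self-contained repro (just a sketch: plain runtime API calls instead of the cutilSafeCall wrapper, an arbitrary 16-element allocation, and no error checking):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    float *storage = NULL;   // stands in for tensor->storage
    float in = 1.2f, out = 0.0f;

    cudaMalloc((void**)&storage, 16 * sizeof(float));
    cudaMemcpy(storage, &in, sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(&out, storage, sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(storage);

    printf("value = %.13f\n", out);   // prints value = 1.2000000476837
    return 0;
}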

Thanks in advance!

Nothing wrong, this is the expected behavior.

You are converting the decimal number 1.2 to a binary floating-point number. 1/10 cannot be exactly represented in binary, nor can 1 + 2 * 1/10 = 1.2, so it has to be rounded to the nearest binary FP number.

Then you convert it back to decimal when printing it. The binary FP number closest to 1.2 does have an exact decimal expansion, but it is long (1.20000004768371582031250), so printing it with a limited number of digits rounds it a second time.
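You can see the same thing without CUDA at all; here is a host-only sketch (nothing in it is specific to your code):

#include <stdio.h>

int main(void)
{
    float value = 1.2f;        /* rounded to the nearest binary float */
    printf("%f\n", value);     /* 1.200000 */
    printf("%.13f\n", value);  /* 1.2000000476837, same as the device round trip */
    printf("%.25f\n", value);  /* full expansion, 1.2000000476837158203125000 on most C libraries */
    return 0;
}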

1 and other small integers (up to 2^24 in single precision) are representable exactly both in decimal and in binary, so there is no rounding error in this case.

As a rule of thumb, a single-precision binary float carries about 7 significant decimal digits; digits beyond that are usually meaningless.
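If you just want the display to respect that rule of thumb, you can limit the printed precision. A sketch (FLT_DIG from <float.h> is the number of decimal digits a float is guaranteed to hold, 6):

#include <stdio.h>
#include <float.h>

int main(void)
{
    float value = 1.2f;
    /* %g drops trailing zeros, so the noise digits disappear */
    printf("%.*g\n", FLT_DIG + 1, value);   /* 7 significant digits: prints 1.2 */
    return 0;
}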

Ok, that’s what I thought. It’s not really an issue for me, but I find it kind of ugly. Should I simply round all my results to the 7th decimal digit?

Blame the computer architects from the 1960s who decided decimal floating-point would be too inefficient.

Anyway, keep in mind that if you do any serious computation on your numbers, it is very unlikely that you’ll get an exact result, no matter which radix your computer uses. You’ll also have to consider where your input data come from: are they exact decimal values, or rather approximate measurements, and in the latter case, what is their accuracy?

Then your calculation may amplify or reduce this measurement error, and at the same time introduce other rounding errors… So knowing how many significant digits your result holds is not an easy question.

But if your question is: how many digits do I need so that I don’t lose information when converting to decimal (so that I can convert it back to binary later)? Then the answer is 9 digits for single precision and 17 digits for double precision. Using more than that is overkill.
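A quick sketch of that round trip, host-only (strtod is used instead of strtof, since the latter is C99-only):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    float original = 1.2f;
    char buffer[64];

    sprintf(buffer, "%.9g", original);             /* 9 significant digits */
    float restored = (float)strtod(buffer, NULL);  /* back to binary */

    /* exact comparison is intentional here: the round trip is lossless */
    printf("printed as %s, identical: %s\n",
           buffer, original == restored ? "yes" : "no");
    return 0;
}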