sin() single-precision error


I am seeing a small single-precision error in some numbers when I do a sin() operation:

float var = 13.55675029754638671875;

float f = sin(var);

RESULT CPU: 0.836234211921691894531250
RESULT GPU: 0.836234271526336669921875

I have also tried using sinf() on the GPU to take the sin of a float, but the result is the same.

Does anyone know why I am getting this difference?

I am running on a GeForce GTX 465 with CUDA 3.2.

Thanks a lot!

Decimal expansions aren’t a good way to analyze floating point. You can never expect more than about 7 significant decimal digits of accuracy from a float. It’s much more informative to compare the bit representations of the values to see any accuracy issues.

The significand (mantissa bits) of your two results are:

CPU: 1.10101100001001101110010

GPU: 1.10101100001001101110011

Notice that they differ only in the last bit: a 1-ULP difference.

The CUDA programming guide says the GPU’s sinf() is accurate to within 2 ULP. Typical CPU math libraries are likewise only accurate to within 1 or 2 ULP.

If your precision requirements are so strict that the last bit matters at all, you shouldn’t be using single-precision floats in the first place.