sin: Lack of precision?

I am in debug mode right now, and I wanted to calculate:

sin(56)

With the Vista “Calculator” (Rechner) and on Linux, I got this result:

-0,5215510020869118801874100215106

But in debug mode, I got the following results (where asciimot == 56):

[b]sinf(56) -0.521551012992859

sinf(__int2float_rn(asciimot)) -0.521551012992859

sinf(__int2float_rz(asciimot)) -0.521551012992859

sinf(__int2float_ru(asciimot)) -0.521551012992859

sinf(__int2float_rd(asciimot)) -0.521551012992859

sin(56) -0.521551012992859

sin(__int2float_rn(asciimot)) -0.521551012992859

sin(__int2float_rz(asciimot)) -0.521551012992859

sin(__int2float_ru(asciimot)) -0.521551012992859

sin(__int2float_rd(asciimot)) -0.521551012992859

__sinf(56) -0.521551012992859

__sinf(__int2float_rn(asciimot)) -0.521551012992859

__sinf(__int2float_rz(asciimot)) -0.521551012992859

__sinf(__int2float_ru(asciimot)) -0.521551012992859

__sinf(__int2float_rd(asciimot)) -0.521551012992859[/b]

=>

Calculator: -0,5215510020869118801874100215106

Debug mode: -0.521551012992859

I have a 9800 GTX+. Is this a result of the lack of double precision?

How could I solve this problem?

IEEE-compliant single precision is only accurate to about 6 or 7 significant decimal digits. Those results look perfectly normal to me. If you want more accuracy on compute 1.0/1.1 GPUs, then you are going to have to implement something with higher precision yourself.
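To make that concrete, something along these lines (a minimal sketch; the kernel name sin_single is just for illustration) compares the device's single-precision sinf(56) against a double-precision sin(56) computed on the host as a reference:

[code]
// Sketch: single-precision sinf on the GPU vs. double-precision sin on the host.
#include <stdio.h>
#include <math.h>

__global__ void sin_single(float x, float *out)
{
    *out = sinf(x);   // single precision: ~7 significant decimal digits
}

int main(void)
{
    float *d_out = 0, h_out = 0.0f;
    cudaMalloc((void **)&d_out, sizeof(float));

    sin_single<<<1, 1>>>(56.0f, d_out);
    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_out);

    double ref = sin(56.0);   // host double as the "more accurate" reference

    printf("GPU  sinf(56) = %.15f\n", h_out);
    printf("Host sin(56)  = %.15f\n", ref);
    printf("difference    = %.3e\n", fabs(h_out - ref));
    return 0;
}
[/code]

The difference should come out around 1e-8, i.e. in the 7th or 8th significant digit, which is exactly what single precision gives you.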

This question has been answered hundreds of times. Single precision on the GPU is not fully IEEE compliant; expect minute differences.
Double precision on GT200 is IEEE compliant.

EDIT: I didn’t see the previous post when I wrote mine.
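For reference, using double precision on a GT200-class card (compute capability 1.3) would look roughly like this sketch; the important part is compiling for sm_13 (e.g. nvcc -arch=sm_13), because otherwise double gets demoted to float:

[code]
// Sketch: double-precision sin in device code, requires compute capability 1.3.
// Build with: nvcc -arch=sm_13 sin_double.cu -o sin_double
#include <stdio.h>
#include <math.h>

__global__ void sin_double(double x, double *out)
{
    *out = sin(x);   // IEEE-compliant double precision on compute 1.3 hardware
}

int main(void)
{
    double *d_out = 0, h_out = 0.0;
    cudaMalloc((void **)&d_out, sizeof(double));

    sin_double<<<1, 1>>>(56.0, d_out);
    cudaMemcpy(&h_out, d_out, sizeof(double), cudaMemcpyDeviceToHost);
    cudaFree(d_out);

    printf("device double sin(56) = %.15f\n", h_out);
    return 0;
}
[/code]

Code built for sm_13 will not launch on a 9800 GTX+ (compute 1.1), so this is only an option on GT200 or newer hardware.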

It’s also worth taking a moment to consider whether this is a problem at all. What are the precision requirements of your application? Unless the Vista calculator is using some kind of quad-precision arithmetic, all the digits beyond roughly the 15th significant digit are wrong in that result as well.

Also, I notice you are in debug mode, which means the CPU is doing the calculation, not the GPU. In that case, you are looking at differences in how your host compiler handles float and double variables, and the same issue applies. (What Mr_Nuke is talking about is that even if you compare single-precision float on the GPU to single-precision float on the CPU, you’ll get slightly different answers, again in the 6th or 7th significant digit.)
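You can see the same float-versus-double gap with no GPU involved at all, e.g. with a small host-only sketch like this:

[code]
// Sketch: the float/double difference already shows up in plain CPU code.
#include <stdio.h>
#include <math.h>

int main(void)
{
    float  f = sinf(56.0f);  // single precision: ~7 significant decimal digits
    double d = sin(56.0);    // double precision: ~15-16 significant decimal digits

    printf("float  sinf(56) = %.15f\n", f);
    printf("double sin(56)  = %.15f\n", d);
    printf("difference      = %.3e\n", fabs(f - d));
    return 0;
}
[/code]

The two values agree only to about 7 significant digits, which matches what you are seeing in the debugger.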