Double vs. Float

I write code in CUDA first using float and then using double as the data type. The output results differ slightly between the two data types. The output is also different when I write the same code in C. Is there any reason for getting different output when changing the data type in CUDA?

float and double will often produce slightly different results on any platform, not just the GPU. You should probably learn more about what these data types mean for calculations in C/C++ in general.

There are often differences between host and device floating-point calculations. There can be many reasons for this, from user error, to library implementation differences, to characteristics of floating-point calculations (such as order of operations) in a parallel environment. You can find many questions asking about such differences with a bit of searching.

No, there is no reason for the results being different. The “double” data type was just invented as a simple means to slow down floating point computations by a factor anywhere between 2 and 32. But somehow the hardware designers got it all wrong and introduced tiny incompatibilities that no-one has ever bothered to track down to their origin.

You may have more luck by dropping “double” altogether and exclusively using “float” only, and achieving the slowdown by some other means, like extra empty loops causing the desired delay.

My sarcasm detector just overflowed after reading tera's post.