Double vs. Float

I wrote the same CUDA code first using float and then using double as the data type, and the results differ slightly between the two. The results also differ when I write the equivalent code in C. Is there a reason why changing the data type in CUDA changes the output?

float and double will often produce slightly different results on any platform, not just on the GPU. You should probably learn more about what these data types mean for calculations in C/C++ in general.
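
A minimal sketch in plain C of what is meant here (the constant and loop count are arbitrary choices for illustration): accumulating the same value in float and in double rounds differently at every step, so the two totals drift apart.

```c
#include <stdio.h>

int main(void)
{
    float  sum_f = 0.0f;
    double sum_d = 0.0;

    /* 0.1 is not exactly representable in binary floating point;
       every addition rounds to the nearest representable value,
       and float has far fewer bits to round into than double. */
    for (int i = 0; i < 1000000; ++i) {
        sum_f += 0.1f;
        sum_d += 0.1;
    }

    /* The float total drifts noticeably away from 100000;
       the double total stays much closer to it. */
    printf("float : %f\n", sum_f);
    printf("double: %f\n", sum_d);
    return 0;
}
```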

There are often differences between host and device floating-point calculations. There can be many reasons for this, from user error, to library implementation differences, to characteristics of floating-point arithmetic (such as order of operations) in a parallel environment. You can find many questions about such differences with a bit of searching.
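
To make the order-of-operations point concrete, here is a small C sketch (the array contents and the chunk count of 1000 are arbitrary illustration choices): summing the same float values left-to-right versus in independent partial sums, the way a parallel reduction groups its additions, typically gives totals that differ in the last digits, even though both are correct to within rounding.

```c
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float x[N];
    for (int i = 0; i < N; ++i)
        x[i] = 1.0f / (float)(i + 1);   /* values of widely varying magnitude */

    /* Order 1: plain left-to-right sum, as a serial CPU loop would do it. */
    float serial = 0.0f;
    for (int i = 0; i < N; ++i)
        serial += x[i];

    /* Order 2: accumulate 1000 independent partial sums, then combine them,
       mimicking how a parallel reduction regroups the additions. */
    float partial[1000] = {0.0f};
    for (int i = 0; i < N; ++i)
        partial[i % 1000] += x[i];

    float chunked = 0.0f;
    for (int i = 0; i < 1000; ++i)
        chunked += partial[i];

    /* The two results usually differ in the low-order digits. */
    printf("serial : %.10f\n", serial);
    printf("chunked: %.10f\n", chunked);
    return 0;
}
```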

No, there is no reason for the results being different. The “double” data type was just invented as a simple means to slow down floating point computations by a factor anywhere between 2 and 32. But somehow the hardware designers got it all wrong and introduced tiny incompatibilities that no-one has ever bothered to track down to their origin.

You may have more luck by dropping “double” altogether and using “float” exclusively, achieving the slowdown by some other means, like extra empty loops causing the desired delay.

My sarcasm detector just overflowed after reading tera’s post.

tera’s answer will cause another Space Shuttle disaster one day.

To be fair, the Space Shuttle never had a disastrous software issue. It was Ariane 5, during its maiden flight, that was lost to a numeric overflow: a 64-bit floating-point value was converted to a 16-bit signed integer that could not hold it.

Not yet lol… but there was a spacecraft that crashed due to a units-of-measurement mixup: the Mars Climate Orbiter, lost because one team worked in metric units and another in imperial.