I'm working with the reduction kernel from the SDK.
For some reason, when I use float and add up an array of 12280 elements, all of the value .0111,
both the GPU and the CPU results are slightly off.
Anyone know what's going on?
First, 136.308 cannot be represented exactly in binary floating point.
Second, the differences between CPU and GPU are in the 8th significant figure. That is actually pretty good, as single-precision floating point is only good out to about 6–7 significant decimal digits.
I think 136.308 in binary is “100001010001110100”.
No, it is 01000011000010000100111011011001.
Your bit pattern is just the integer 136308 written in binary; interpreted as an IEEE 754 single it would be the denormal value 1.91008E-40.
Have a look at IEEE 754 Converter