Hi all,
I have a possibly silly question concerning precision.
Assume I have a device with compute capability 1.3.
When I do a calculation on the GPU using double-precision functions (e.g. sin()) with a mix of double and float operands, I'm using 64-bit math.
If I use a single-precision function like sinf() with a mix of double and float operands, I'm using 32-bit math…
If I compile my software with --use_fast_math, or call intrinsics like __sinf() directly, I'm using reduced-precision (roughly 24-bit) math…
Is this all right?
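To make the three cases concrete, here is a minimal kernel sketch of what I mean (the kernel name and setup are just illustrative, not from any real code):

```cuda
// Sketch: the three precision levels in one kernel (illustrative only).
__global__ void sine_variants(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        double d = sin((double)in[i]); // case 1: double-precision sin(), 64-bit math
        float  s = sinf(in[i]);        // case 2: single-precision sinf(), 32-bit math
        float  f = __sinf(in[i]);      // case 3: fast intrinsic, reduced precision
                                       // (sinf() also maps to __sinf() under --use_fast_math)
        out[i] = (float)d + s + f;     // combine results just so nothing is optimized away
    }
}
```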
Are there significant differences in terms of performance (I'm not considering memory bandwidth) just from switching from float to double values?
Thanks to all,