I cannot find a straight answer on how CUDA actually deals with integers. I have read that GPUs support only single-precision floating-point numbers: no integers and no doubles. Yet CUDA accepts integers freely, even in device code. How is this possible? Does the CUDA compiler insert conversion code that represents the integers as floats? If so, my understanding is that some precision must be lost.
This concerns me because I need to use XOR, so a lossy conversion to floating point would not be acceptable.
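For context, here is a minimal sketch of the kind of kernel I have in mind (the names, key, and sizes are just for illustration):

```
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: XOR every element of `data` with a key.
// This only makes sense if 32-bit integer XOR runs natively on
// the device, with no conversion to floating point.
__global__ void xorKernel(unsigned int *data, unsigned int key, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] ^= key;  // full-width 32-bit integer XOR
}

int main()
{
    const int n = 256;
    unsigned int h[n];
    for (int i = 0; i < n; ++i) h[i] = i;

    unsigned int *d;
    cudaMalloc(&d, n * sizeof(unsigned int));
    cudaMemcpy(d, h, n * sizeof(unsigned int), cudaMemcpyHostToDevice);

    xorKernel<<<(n + 127) / 128, 128>>>(d, 0xDEADBEEFu, n);

    cudaMemcpy(h, d, n * sizeof(unsigned int), cudaMemcpyDeviceToHost);
    cudaFree(d);

    printf("h[1] = 0x%08X\n", h[1]);  // expect 0xDEADBEEE (1 ^ 0xDEADBEEF)
    return 0;
}
```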
Also keep in mind that the 8800 performs 32-bit integer multiplication about 4x slower than floating-point multiplication. Integer addition and bitwise operations, however, run at the same speed as floating-point addition and multiplication.
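If integer multiply does become a bottleneck on that hardware, one possible workaround (assuming your operands fit in 24 bits) is the `__mul24`/`__umul24` intrinsics, which run at full rate on compute 1.x parts such as the 8800. A rough sketch, with a made-up kernel name:

```
// Hypothetical kernel: scales each element using the 24-bit multiply
// intrinsic, which runs at full rate on compute 1.x GPUs (e.g. the 8800).
// The result is only correct if both operands fit in 24 bits.
__global__ void scale24(int *data, int factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = __mul24(data[i], factor);
}
```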