CUDA integer ops in hardware: the skinny on ints in CUDA and hardware

Hello all,

I cannot find a straight answer about how CUDA actually deals with integers. I have read that GPUs support only single-precision floating-point numbers: no integers and no doubles. However, CUDA supports integers freely, even in device code. How is this possible? Does the CUDA compiler insert some sort of conversion code to turn them into floating-point values? If so, it is my understanding that some precision must be lost.

I am concerned because I need to use XOR, so a conversion to a floating-point number with loss of precision would not be acceptable.

Anyone have some light to shed?
Thanks!

CUDA runs only on the GeForce 8800 and later cards, and those GPUs do integer arithmetic in hardware.
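
For example, something like this (a minimal sketch; the kernel and variable names are just for illustration) does the XOR directly on 32-bit integers in device code, with no float conversion anywhere:

__global__ void xorKernel(const unsigned int *a, const unsigned int *b,
                          unsigned int *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a[i] ^ b[i];  // plain 32-bit bitwise XOR, done in hardware
}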

Peter

Keep in mind, though, that the 8800 is about 4x slower at 32-bit integer multiplication than at floating-point multiplication. Integer addition and bitwise operations run at the same speed as floating-point addition and multiplication.

And if both of your operands fit in 24 bits, you can use the __mul24 intrinsic, which I believe runs at full speed (as fast as a float multiply).
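
For instance (a rough sketch with made-up names, and I have not benchmarked it), __mul24 is called like any other device function:

__global__ void mul24Kernel(const int *a, const int *b, int *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __mul24(a[i], b[i]);  // multiplies the low 24 bits of each operand;
                                       // only gives the right answer if both values fit in 24 bits
}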