I cannot find a straight answer about how CUDA actually deals with integers. I have read that GPUs support only single-precision floating-point numbers: no integers and no doubles. However, CUDA lets me use integers freely, even in device code. How is this possible? Does the CUDA compiler insert some sort of conversion code to represent them as floats? If so, my understanding is that some precision must be lost.
I am particularly concerned because I need to use bitwise XOR, and a lossy conversion to floating point would not be acceptable there.
Anyone have some light to shed?