I just started learning CUDA, so I’m not very familiar with it.
I was wondering whether there is a way to find out the limits of the types (e.g. INT_MAX). I’m not fond of using magic numbers like 32,767, since these might differ between architectures.
If you’re wondering why I need this, the answer is the following: I’d like to have a synchronized/atomic add for floats. My idea was to accumulate scaled integers and then eventually divide the integer result by INT_MAX.
I don’t think the types on the GPU are different from the CPU ones. A float in a kernel is still a 4-byte float with the same limits as the CPU version (the same goes for ints, chars, etc.).
This is true in general, but you can make much stricter claims about the GPU/CPU comparison. In fact, for memcopied blocks of data to make sense on both the GPU and the CPU, sizeof(type) (and the alignment of the type) has to be the same on both devices.
limits.h should define the correct values for any particular target platform. The CUDA values need to match the host architecture for the reasons mentioned by seibert. The typical values shown at that link should apply to PCs and Intel Macs, and therefore to CUDA.