Image processing in floating point?

Many of the examples I am seeing for performing image processing in CUDA do their processing in floating point (e.g. convolutionSeparable takes a floating-point image). Is this always recommended? I’m used to keeping things in integer space when working on the CPU. Should I always convert my image to floating point before sending it to CUDA? What about color images, should they be sent as 3 floats per pixel? I also notice the histogram sample does not do this; everything is sent as bytes.

Yes. Since floating-point computation is essentially the same speed as integer on the GPU (and conversions are fast), we often use floating point for convenience.
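A common pattern is to keep the image in memory as 8-bit and convert to float on the fly inside the kernel, so you don’t pay 4x the bandwidth for a float image. Here’s a minimal sketch of that idea; the kernel name and the brightening operation are just made up for illustration:

__global__ void brighten(const unsigned char *in, unsigned char *out,
                         int numPixels, float gain)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numPixels)
    {
        float v = (float)in[i] * gain;       // int -> float conversion is cheap
        v = fminf(v, 255.0f);                // clamp back into 8-bit range
        out[i] = (unsigned char)(v + 0.5f);  // round and convert back
    }
}

You get the convenience and precision of float arithmetic while the data stays compact in global memory.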

There’s no reason not to use integers, but you should be aware that a full 32-bit integer multiply takes 16 clock cycles, whereas a 24-bit integer multiply using the __mul24 intrinsic takes only 4 cycles. See the programming guide for more details.
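The typical place this shows up is address arithmetic. Something like the sketch below (kernel and names are illustrative, not from the SDK) uses __mul24 where the operands are known to fit in 24 bits:

__global__ void copyPixels(const float *in, float *out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
    {
        // __mul24(y, width) is faster than y * width on current hardware,
        // and is safe as long as y and width both fit in 24 bits.
        int idx = __mul24(y, width) + x;
        out[idx] = in[idx];
    }
}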

You might want to use integers if you need perfect-reconstruction (lossless) transforms or strict compatibility with some specification. Otherwise floating point usually gives better precision, and in terms of speed there is really no difference.