Why can't uint8 or int16 data be bound as a floating point texture?

According to the programming manual, this is not supported.

So, the following won’t work:

texture<float, 2> g_texture(0, cudaFilterModePoint, cudaAddressModeClamp);
cudaChannelFormatDesc desc = cudaCreateChannelDesc<int16_t>();
cudaBindTexture2D(NULL, &g_texture, dataGPU(), &desc, 256, 256, dataGPU.Pitch());

Is this a hardware limitation? As far as I know, even graphics boards from 10 years ago could do bilinear interpolation of uint8 data, though probably with fixed-point interpolation rather than float interpolation as on CUDA hardware. Is the int-to-float conversion really that expensive?

Yes, you can do this. To get floats returned from an integer-type texture, just set the read mode to cudaReadModeNormalizedFloat:

texture<short, 2, cudaReadModeNormalizedFloat> g_texture;
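
For completeness, here is a minimal sketch of the whole path in one place (the buffer name, kernel, and launch parameters are just placeholders for illustration, not from the original post). With cudaReadModeNormalizedFloat the texture hardware converts each signed short to a float in [-1.0, 1.0] (unsigned types map to [0.0, 1.0]), and you can additionally set filterMode to cudaFilterModeLinear to get hardware bilinear interpolation on those floats:

#include <cuda_runtime.h>

// Texture reference: 16-bit signed integer storage, read back as normalized float.
texture<short, 2, cudaReadModeNormalizedFloat> g_texture;

__global__ void sampleKernel(float* out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
    {
        // tex2D returns a float in [-1, 1] because the element type is a signed
        // short and the read mode is cudaReadModeNormalizedFloat.
        out[y * width + x] = tex2D(g_texture, x + 0.5f, y + 0.5f);
    }
}

int main()
{
    const int width = 256, height = 256;

    // Placeholder source data: a pitched device allocation of shorts.
    short* d_data = NULL;
    size_t pitch = 0;
    cudaMallocPitch(&d_data, &pitch, width * sizeof(short), height);

    // The channel descriptor must match the storage type (short), not float.
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<short>();

    // Optional: hardware bilinear filtering on the normalized-float reads.
    g_texture.filterMode = cudaFilterModeLinear;
    g_texture.addressMode[0] = cudaAddressModeClamp;
    g_texture.addressMode[1] = cudaAddressModeClamp;

    cudaBindTexture2D(NULL, &g_texture, d_data, &desc, width, height, pitch);

    float* d_out = NULL;
    cudaMalloc(&d_out, width * height * sizeof(float));

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    sampleKernel<<<grid, block>>>(d_out, width, height);
    cudaDeviceSynchronize();

    cudaUnbindTexture(g_texture);
    cudaFree(d_out);
    cudaFree(d_data);
    return 0;
}

Note that this uses the texture reference API from the original post; more recent CUDA releases deprecate it in favor of texture objects, but the normalized-float read mode works the same way there.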
