Correct me if I’m wrong, but what you have is an image stored with 8 bits each for red, green and blue, and you want to read it in the “normal” graphics mode, where you get a floating-point number in [0, 1] for each channel?
If so, then you want the cudaReadModeNormalizedFloat read mode for the texture. You will need to pad your data when you load it so that the last 8 bits of each 32-bit value are present (the alpha channel, in graphics speak), even if you don’t use them.
Disclaimer: I haven’t used textures in this way yet, so the code is entirely based on what I read in the programming guide for setting it up.
Declare your texture reference like so:
texture<uchar4, 2, cudaReadModeNormalizedFloat> tex_ref;
Note: I don’t know if you want filtering, wrapping or normalized coordinates, so I leave those options at their defaults. See the texture reference section of the programming guide for how to set these up.
To allocate, set filtering options, and bind the array, all you need is the following.
cudaArray* cuArray;
cudaMallocArray(&cuArray, &tex_ref.channelDesc, width, height);
// set tex_ref.normalized, filterMode and addressMode here to your liking
cudaBindTextureToArray(tex_ref, cuArray);
Obviously, you need to copy the data to the array sometime before you try to read it too :)
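Putting the pieces together, the whole setup might look like the sketch below. As per my disclaimer, this is untested and based on the programming guide; the function and variable names are my own, and it assumes img_rgba is the padded host buffer of width * height uchar4 pixels:

```cuda
texture<uchar4, 2, cudaReadModeNormalizedFloat> tex_ref;

void setupTexture(const uchar4* img_rgba, int width, int height)
{
    cudaArray* cuArray;
    cudaMallocArray(&cuArray, &tex_ref.channelDesc, width, height);

    // Copy the padded image into the array before any kernel reads it.
    cudaMemcpyToArray(cuArray, 0, 0, img_rgba,
                      width * height * sizeof(uchar4),
                      cudaMemcpyHostToDevice);

    // Filtering / addressing options -- adjust to taste.
    tex_ref.normalized     = false;
    tex_ref.filterMode     = cudaFilterModePoint;
    tex_ref.addressMode[0] = cudaAddressModeClamp;
    tex_ref.addressMode[1] = cudaAddressModeClamp;

    cudaBindTextureToArray(tex_ref, cuArray);
}
```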
I’ll also point out to you that the cudaArray method of texturing is really only needed if your kernels access the image with 2D-local patterns (or if you want all the fancy wrapping/normalized addressing modes). If you have an extremely regular memory access pattern across the rows of the image, a straight coalesced global memory read is the way to go. Or, if you really aren’t taking full advantage of the 2D cache offered by the cudaArray, you could get by with the simpler 1D textures read directly from device memory, which have their advantages too.
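If you do go the cudaArray route, the kernel side is just a tex2D call; because of the normalized-float read mode, each uchar4 texel comes back as a float4 in [0, 1]. Again an untested sketch, names my own:

```cuda
__global__ void readTexture(float* out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // tex_ref is the texture reference bound earlier in the post.
    float4 texel = tex2D(tex_ref, x, y);
    out[y * width + x] = texel.x; // e.g. keep just the red channel
}
```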