texture memory - unnormalizing.

Hi Mates,

Simple question - is there a simple way to unnormalize the result of data read from texture memory?

TTFN,

Yunior

I just wrote the following test program that does just that: read a texture and unnormalize, so as to end up with the original values:

#include <stdint.h>

// 2D texture of 8-bit unsigned texels, fetched as normalized floats in [0.0, 1.0]
texture<float, 2, cudaReadModeNormalizedFloat> tex;

__device__ uint8_t clamp_u8(int16_t i)
{
    return min(max(i, 0), 255);
}

__global__ void
transformKernel(uint8_t* g_odata, int stride, int width, int height)
{
    unsigned int x = blockIdx.x*blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y*blockDim.y + threadIdx.y;

    if (x >= width || y >= height)
        return;

    // Sample at the texel center (+0.5f), then scale the normalized value
    // back to 0..255; the clamp handles the 1.0 -> 256 edge case.
    g_odata[y*stride + x] = clamp_u8(tex2D(tex, x + 0.5f, y + 0.5f) * 256.0f);
}

stride is the difference in bytes between two successive rows; this can be larger than the width in some cases, since rows may be rounded up (padded) for efficiency.
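For what it's worth, here's a minimal host-side sketch of where that padded stride typically comes from, assuming the runtime API's cudaMallocPitch and cudaBindTexture2D (the buffer name d_src and the 256x256 size are just placeholders):

// Sketch: allocate pitched device memory and bind the texture reference
// declared above to it. The driver picks the pitch (stride in bytes).
size_t pitchInBytes;
uint8_t* d_src;
int width = 256, height = 256;

cudaMallocPitch((void**)&d_src, &pitchInBytes, width * sizeof(uint8_t), height);

cudaChannelFormatDesc channelDesc =
    cudaCreateChannelDesc(8, 0, 0, 0, cudaChannelFormatKindUnsigned);
cudaBindTexture2D(NULL, tex, d_src, channelDesc, width, height, pitchInBytes);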

If you use cudaReadModeElementType, the results from the texture fetch will be of the same type as the texture.

But sometimes you need linear interpolation and the result to be in the same range as the original. And as far as I’ve seen, that only works when normalizing.

You don’t have to use normalized results to get that. Filtering works just fine on unnormalized float types.

Well, my experience is that you cannot use linear filtering if you use this:

typedef texture<float, 2, cudaReadModeElementType> RefTex;

But you can if you use

typedef texture<float, 2, cudaReadModeNormalizedFloat> RefTex;

How do you use it on unnormalized types? Mind you, my texture’s internal format is uint8, not float:

cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc(8, 0, 0, 0, cudaChannelFormatKindUnsigned);

I think my only option is to use normalized floats and “unnormalize” them if I want them in 0.0 … 255.0.

Use float as the internal texture type. I’m using it in my code without a problem.
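For example, something along these lines should work (just a sketch; floatTex, setupFloatTex, and sampleKernel are made-up names, and setting filterMode to cudaFilterModeLinear on the reference is what enables the hardware interpolation):

// Sketch: a 32-bit float texture read back as element type, with linear
// filtering enabled on the texture reference.
texture<float, 2, cudaReadModeElementType> floatTex;

void setupFloatTex(float* d_data, int width, int height, size_t pitchInBytes)
{
    floatTex.filterMode = cudaFilterModeLinear;   // hardware interpolation

    cudaChannelFormatDesc desc =
        cudaCreateChannelDesc(32, 0, 0, 0, cudaChannelFormatKindFloat);
    cudaBindTexture2D(NULL, floatTex, d_data, desc, width, height, pitchInBytes);
}

__global__ void sampleKernel(float* out, int stride, int width, int height)
{
    // stride is in elements here (pitch / sizeof(float))
    unsigned int x = blockIdx.x*blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y*blockDim.y + threadIdx.y;
    if (x < width && y < height)
        // Interpolated fetch; the result stays in the original value range.
        out[y*stride + x] = tex2D(floatTex, x + 0.5f, y + 0.5f);
}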

Are you using this channelDesc with the texture reference declarations above? (The channelDesc indicates an 8-bit unsigned type and the texture reference indicates a 32-bit signed float.) I don’t know the internal details, but I would think this would cause problems in texture fetching.

That is intended behaviour; in fact, it’s exactly what happens when you fetch from a texture in OpenGL. I store the texture as uint8 in memory to save memory and bandwidth. When you fetch a texel, it is normalized from 0…255 to 0.0…1.0.
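(For reference: the hardware maps an 8-bit unsigned texel v to v / 255.0f, so multiplying an unfiltered fetch by 255.0f recovers the original value exactly.)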

I didn’t know you could do that. Thanks for the info.

Since there isn’t a “cudaReadModeFloat”, I suspect you’re stuck using normalized floats if float isn’t the underlying type.