compile error for 2D normalized texture

I am experiencing the same issue as reported in

Briefly, I first declared a 2D texture reference:

		texture<float2, 2, cudaReadModeNormalizedFloat> tS;

		float2 *gS;
		cudaMalloc((void **) &gS, sizeof(float2) * DIMX * DIMY);

		cudaChannelFormatDesc f2tex = cudaCreateChannelDesc<float2>();
		// note: the pitch argument is in bytes, hence DIMX * sizeof(float2)
		cudaBindTexture2D(0, tS, gS, f2tex, DIMX, DIMY, DIMX * sizeof(float2));

Then, inside the kernel, I used

float2 ss=tex2D(tS,idx*rpix.x,idy*rpix.y);

to read the texture (idx*rpix.x and idy*rpix.y are the normalized coordinates). When I compiled the code, nvcc complained:

error: no instance of overloaded function "tex2D" matches the argument list

		   argument types are: (texture<float2, 2, cudaReadModeNormalizedFloat>, float, float)

If I change cudaReadModeNormalizedFloat to cudaReadModeElementType, nvcc accepts it; if I change float2 to uchar or float, it compiles as well.
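For reference, here are minimal declarations of the variants that do compile (a sketch with my own names, not the original code):

```cuda
#include <cuda_runtime.h>

// Integer texel types accept cudaReadModeNormalizedFloat:
// tex2D then returns the value converted to float.
texture<unsigned char, 2, cudaReadModeNormalizedFloat> tU;

// Floating-point texel types must use cudaReadModeElementType:
// tex2D returns the raw element, here a float2.
texture<float2, 2, cudaReadModeElementType> tF;

__global__ void readBoth(float *out, float u, float v)
{
    float  a = tex2D(tU, u, v); // uchar texel mapped to a [0,1] float
    float2 b = tex2D(tF, u, v); // unconverted float2 element
    out[0] = a + b.x + b.y;
}
```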

Is this a bug?

After reading the programming guide more carefully, I realized that textures bound to linear memory do not support 2D addressing, so I guess I need a cudaArray.
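A minimal sketch of the cudaArray binding (my own names throughout; shown with cudaReadModeElementType, which is the read mode that compiles for float2):

```cuda
#include <cuda_runtime.h>

texture<float2, 2, cudaReadModeElementType> tS;

void bindToArray(const float2 *hostData, int DIMX, int DIMY)
{
    cudaChannelFormatDesc f2tex = cudaCreateChannelDesc<float2>();

    // Allocate a 2D cudaArray and copy the data into it.
    cudaArray *arr;
    cudaMallocArray(&arr, &f2tex, DIMX, DIMY);
    cudaMemcpyToArray(arr, 0, 0, hostData,
                      sizeof(float2) * DIMX * DIMY,
                      cudaMemcpyHostToDevice);

    tS.normalized = true;                 // take coordinates in [0,1)
    tS.filterMode = cudaFilterModeLinear; // enable bilinear filtering
    cudaBindTextureToArray(tS, arr, f2tex);
}
```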

Another question: when I specify cudaReadModeNormalizedFloat, can I use normalized coordinates but still retrieve the absolute (unconverted) texture values? From the programming guide, it seems that cudaReadModeNormalizedFloat also normalizes the returned value to 0~1 (or -1~1 for signed types).
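My understanding (worth double-checking against the guide) is that these are two independent settings: coordinate normalization is the texture reference's runtime normalized field, while value conversion comes solely from the read-mode template parameter. A sketch under that assumption:

```cuda
#include <cuda_runtime.h>

// Normalized coordinates without normalized values: the normalized
// field controls coordinate interpretation, while the read mode
// controls value conversion.
texture<float2, 2, cudaReadModeElementType> tAbs;

void configure()
{
    tAbs.normalized = true; // tex2D now takes coordinates in [0,1)
    // Because the read mode is cudaReadModeElementType, the returned
    // float2 values are the stored elements, not rescaled to 0~1.
}
```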

It's too bad: I changed everything to use a cudaArray and bound the texture to it, but nvcc still fails with the same error message.

Can anyone help?