Simple question on texture memory


I access my data in read-only mode, and I guessed I could use texture memory for that.

My idea is to use a texture like an array, but with caching.

But it doesn't work so far.

The texture is declared like this:

texture<float, 1, cudaReadModeElementType> tex;

The data array is allocated and filled like this:

cudaMalloc( (void**) &d_population, sizeof(float)*dimX*dimY);

cudaMemcpy(d_population, pop, dimX*dimY*sizeof(float*), cudaMemcpyHostToDevice);

(pop and d_population are 1D arrays)

I bind the texture this way:


Finally, in the kernel, I use the texture like this:

float value = tex1Dfetch(tex,0);

But the value is not what I expect: I get random values. As I'm a beginner in CUDA, I guess there is a problem in my approach.

I tried with a cudaArray, but with no better result.

I run my tests in device-emulation mode, but I assume textures are emulated too.

Did I do something wrong? Any ideas?

Many thanks

With regards


Only the first parameter is required; the other two are the defaults anyway:

texture<float> tex;

Just declare d_population to be a void*; then you won't need any type-casting on it. In the second line, sizeof(float*) should be sizeof(float):

cudaMalloc(&d_population, sizeof(float)*dimX*dimY);

cudaMemcpy(d_population, pop, dimX*dimY*sizeof(float), cudaMemcpyHostToDevice);

This is where you are actually breaking your texture: the fourth parameter is the channel descriptor, not the texture size. Just leave it out; CUDA will derive the descriptor from your texture reference:


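(The original snippet didn't survive, but a corrected call would presumably look something like the sketch below. It assumes the `tex` and `d_population` declarations from the question and uses the templated C++ overload of `cudaBindTexture`, which derives the channel descriptor from the texture reference itself.)

```cuda
// Templated overload of cudaBindTexture: the channel descriptor is derived
// from the texture reference, so no cudaChannelFormatDesc is passed.
cudaBindTexture(NULL,                          // optional byte-offset output, not needed here
                tex,                           // texture reference declared at file scope
                d_population,                  // linear device memory from cudaMalloc
                sizeof(float) * dimX * dimY);  // size of the bound region in bytes
```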
That last line seems correct.

Your binding call is incorrect: the second argument must be the texture reference, the third argument must be the pointer to the device memory, and you don't really need the fourth one.
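To put the whole flow in one place, here is a minimal self-contained sketch (names such as `readFirst` and `d_data` are invented for illustration; this uses the legacy texture-reference API of the era, which was later removed in CUDA 12):

```cuda
#include <cstdio>

// File-scope texture reference: 1D, element-type reads.
texture<float, 1, cudaReadModeElementType> tex;

__global__ void readFirst(float *out)
{
    // tex1Dfetch indexes the linear memory bound to the texture reference.
    *out = tex1Dfetch(tex, 0);
}

int main()
{
    const int n = 16;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = (float)i;

    float *d_data = NULL, *d_out = NULL;
    cudaMalloc((void **)&d_data, n * sizeof(float));
    cudaMalloc((void **)&d_out, sizeof(float));
    cudaMemcpy(d_data, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // Argument order: offset, texture reference, device pointer, size in bytes.
    cudaBindTexture(NULL, tex, d_data, n * sizeof(float));

    readFirst<<<1, 1>>>(d_out);

    float result = -1.0f;
    cudaMemcpy(&result, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("first element = %f\n", result);  // should print host[0], i.e. 0.0

    cudaUnbindTexture(tex);
    cudaFree(d_data);
    cudaFree(d_out);
    return 0;
}
```

Error checking is omitted for brevity; in real code each CUDA call's return value should be checked.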


Many thanks to both of you for your short and accurate replies.
Now it works (not much faster, but that's another problem), but it works.

Just for chatting:
I didn't change the type of d_population to void*, because I prefer to keep the real type of my pointers, but I think that doesn't change anything in this case.
The sizeof(float*) was a stupid error, but luckily for me I'm on a 32-bit architecture, where pointers and floats are both 4 bytes. Thanks for that too.