Problems setting up a linear memory texture using the driver API

Hi,

I’m trying to get my kernel to access a block of linear memory via texture lookups. The same CUDA code worked when I was using the runtime API, but now that I’ve switched to the driver API it has stopped working.

There isn’t an example of texturing from linear memory in the CUDA documentation (as far as I can see) and googling for code snippets that use cuTexRefSetAddress() doesn’t turn up anything useful.

The texture is defined in the .cu file as:

[codebox]

texture<uint4,1,cudaReadModeElementType> hash2;

[/codebox]

I set up the texture, and the memory that backs it like this:

[codebox]

CUdeviceptr d_hash2_mem;

void setup(…, uint32_t *hash2_mem, size_t hash2_sz, …) {

CUtexref t_hash2;

failOnCUDAErr(cuMemAlloc(&d_hash2_mem, hash2_sz * 4));

failOnCUDAErr(cuMemcpyHtoD(d_hash2_mem, hash2_mem, hash2_sz * 4));

failOnCUDAErr(cuModuleGetTexRef(&t_hash2, mod, "hash2"));

failOnCUDAErr(cuTexRefSetFormat(t_hash2, CU_AD_FORMAT_UNSIGNED_INT32, 4));

failOnCUDAErr(cuTexRefSetFlags(t_hash2, CU_TRSF_READ_AS_INTEGER));

/* NULL ByteOffset is fine here: cuMemAlloc() returns pointers that meet the texture alignment requirement */

failOnCUDAErr(cuTexRefSetAddress(NULL, t_hash2, d_hash2_mem, hash2_sz * 4));

}

[/codebox]

And then the kernel reads the texture as follows:

[codebox]

tex1Dfetch(hash2, index);

[/codebox]

The texture read is always returning 0, whereas I know the first component of the data should be 0xffffffff.

As an additional check, I’ve passed d_hash2_mem to the kernel as an ordinary pointer, and reading the memory directly through that pointer gives the correct value. This makes me think it must be something I’m missing in the texture setup.

Cheers,

Toby.

Solved. Although it’s not stated explicitly in the docs, you need to call cuParamSetTexRef() before launching the kernel in order to (I presume) bind the texture reference to a texture unit.
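For anyone who hits the same problem, here is a rough sketch of the launch path with the extra call. This assumes t_hash2 is kept around after setup() (in my code above it was a local, so it needs to be moved to file scope or passed out), and the kernel name "my_kernel" and the grid dimensions are just placeholders:

[codebox]

CUfunction f;

failOnCUDAErr(cuModuleGetFunction(&f, mod, "my_kernel"));

/* CU_PARAM_TR_DEFAULT tells the driver to use the texture unit the

compiler assigned to this texref; without this call, tex1Dfetch()

silently returned zeros for me */

failOnCUDAErr(cuParamSetTexRef(f, CU_PARAM_TR_DEFAULT, t_hash2));

failOnCUDAErr(cuLaunchGrid(f, gridW, gridH));

[/codebox]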