Very frustrating "invalid texture reference" problem

Hello guys,

I'm experiencing a very frustrating "invalid texture reference" error when binding a texture to an array.

The same code worked fine on the same machine under Windows (which means my hardware, an 8800 GTX, should be fine). But since my coworker uses Ubuntu Linux, I switched to Ubuntu and ran into this problem, while my coworker doesn't see it on his machine.

Basically, this code binds a chunk of 3-dimensional float4 data to a texture. The size of the data is given by x, y, and z.

texture<float4, 3, cudaReadModeElementType> flowfield;

// volume and texScale are used below but not declared in this snippet;
// presumably they are file-scope globals, roughly like these:
cudaArray *volume = NULL;
float3 texScale;

void setCudaTexture(float4 *data, int x, int y, int z) {

	cudaExtent extent = make_cudaExtent(x, y, z);
	cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<float4>();

	// allocate a 3D CUDA array to hold the volume
	cutilSafeCall(cudaMalloc3DArray(&volume, &channelDesc, extent));

	// copy the host data into the array
	cudaMemcpy3DParms copyParams = {0};
	copyParams.srcPtr = make_cudaPitchedPtr((void*)data, extent.width * sizeof(float4),
	                                        extent.width, extent.height);
	copyParams.dstArray = volume;
	copyParams.extent = extent;
	copyParams.kind = cudaMemcpyHostToDevice;
	cutilSafeCall(cudaMemcpy3D(&copyParams));

	// texture reference setup: normalized coordinates, trilinear filtering,
	// clamped addressing on all three axes
	flowfield.normalized = 1;
	flowfield.filterMode = cudaFilterModeLinear;
	flowfield.addressMode[0] = cudaAddressModeClamp;
	flowfield.addressMode[1] = cudaAddressModeClamp;
	flowfield.addressMode[2] = cudaAddressModeClamp;

	cutilSafeCall(cudaBindTextureToArray(flowfield, volume, channelDesc));  // this line gives the error

	// scale factors for converting integer indices to normalized coordinates
	texScale.x = 1.f / extent.width;
	texScale.y = 1.f / extent.height;
	texScale.z = 1.f / extent.depth;
}
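For context, a kernel would fetch from this texture roughly like the sketch below (the kernel itself isn't part of my post, so the name and indexing are just an illustration). Since flowfield.normalized = 1, tex3D() expects coordinates in [0,1], which is what texScale is for:

__global__ void sampleFlowField(float4 *out, float3 texScale, int x, int y, int z)
{
	int i = blockIdx.x * blockDim.x + threadIdx.x;
	if (i >= x * y * z) return;

	// recover the 3D index from the linear thread index
	int ix = i % x;
	int iy = (i / x) % y;
	int iz = i / (x * y);

	// normalized == 1, so coordinates must be in [0,1]; texScale converts
	// integer indices to normalized coordinates (+0.5f hits the texel center)
	out[i] = tex3D(flowfield,
	               (ix + 0.5f) * texScale.x,
	               (iy + 0.5f) * texScale.y,
	               (iz + 0.5f) * texScale.z);
}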

I googled and found a similar problem: The Official NVIDIA Forums | NVIDIA

But I don't think it is the same problem, because I'm pretty sure that I only bind the texture reference once.

Please help.

Thank you.

More information about my machine and my coworker's machine:

Mine: Ubuntu 10.10, 8800 GTX (doesn't work); Windows XP (works)

My coworker's: Ubuntu 9.10, GTX 480 (works) and GTX 285 (works)

I tried but couldn't reproduce the error!

How big is the array you are binding to the texture? The obvious hardware difference between the working and non-working cases is memory size. The 8800 GTX only has 768 MB; the other cards have at least 1 GB (and the Windows XP versus Ubuntu difference on the 8800 GTX machine could come down to differences in CUDA context and display manager memory usage, especially if it is 64-bit Ubuntu versus 32-bit XP).
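One quick way to test the memory theory is to print the free/total device memory right before the bind; cudaMemGetInfo() in the runtime API does exactly that (a minimal sketch, reusing the SDK's cutilSafeCall macro):

#include <cstdio>
#include <cuda_runtime.h>
#include <cutil_inline.h>

void printDeviceMemory(void)
{
	size_t freeBytes = 0, totalBytes = 0;
	cutilSafeCall(cudaMemGetInfo(&freeBytes, &totalBytes));
	printf("device memory: %lu MB free of %lu MB total\n",
	       (unsigned long)(freeBytes >> 20),
	       (unsigned long)(totalBytes >> 20));
}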

The size is very small. The actual data is 64×64×64 (64³ float4s is only 4 MB), and I even tried 16×16×16 data; still not working.

BTW, I'm using the newest CUDA SDK.

Yeah, that's why it's such a frustrating problem. It's only happening on my machine under Linux.

Here is a little update:

I downgraded my Linux to Ubuntu 9.10, the same version used on my coworker's machine, but I still get the same problem: invalid texture reference.

What should I do?

What's the driver and CUDA version? CUDA 3.2 and 260.99?
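If you're not sure which versions you actually have, the runtime can report them; a small sketch using cudaDriverGetVersion() and cudaRuntimeGetVersion():

#include <cstdio>
#include <cuda_runtime.h>

void printCudaVersions(void)
{
	int driverVersion = 0, runtimeVersion = 0;
	cudaDriverGetVersion(&driverVersion);
	cudaRuntimeGetVersion(&runtimeVersion);
	// versions are encoded as 1000*major + 10*minor, e.g. 3020 == CUDA 3.2
	printf("driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
	       driverVersion / 1000, (driverVersion % 100) / 10,
	       runtimeVersion / 1000, (runtimeVersion % 100) / 10);
}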

At least in previous SDK versions, texture references had to be declared and used in a single file. You need to write wrapper C functions to deal with binding etc. Just do not use the texture reference in another file.
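Concretely, the pattern looks something like this (a sketch; the file and function names are made up). The texture reference lives in exactly one .cu file, and other translation units only ever call the C wrappers:

// flowfield_tex.cu -- the ONLY file that mentions the texture reference
#include <cutil_inline.h>

texture<float4, 3, cudaReadModeElementType> flowfield;

extern "C" void bindFlowField(cudaArray *volume, const cudaChannelFormatDesc *desc)
{
	cutilSafeCall(cudaBindTextureToArray(flowfield, volume, *desc));
}

extern "C" void unbindFlowField(void)
{
	cutilSafeCall(cudaUnbindTexture(flowfield));
}

// other .cpp/.cu files only see the wrapper declarations, never the texture:
// extern "C" void bindFlowField(cudaArray *volume, const cudaChannelFormatDesc *desc);
// extern "C" void unbindFlowField(void);

This matters because a texture reference is a static, file-scope object; referring to it from another translation unit can compile but leave the runtime binding against the wrong (or an uninitialized) reference.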