hello guys,
i’m running into a very frustrating “invalid texture reference” error when binding a texture to an array.
the same code worked fine on the same machine under windows (so my hardware, an 8800GTX, should be fine). but since my coworker uses ubuntu linux, i switched to ubuntu and hit this problem, while my coworker doesn’t see it on his machines.
basically, this code copies a chunk of 3-dimensional float4 data into a cudaArray and binds it to a 3D texture. the size of the data is given by x, y and z.
texture<float4, 3, cudaReadModeElementType> flowfield;
cudaArray *volume;   // device array backing the texture
float3 texScale;     // reciprocal extents, for normalized coordinates

void setCudaTexture(float4 *data, int x, int y, int z) {
cudaExtent extent = make_cudaExtent(x,y,z);
cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<float4>();
cutilSafeCall( cudaMalloc3DArray(&volume, &channelDesc, extent));
cudaMemcpy3DParms copyParams = {0};
copyParams.srcPtr = make_cudaPitchedPtr((void*)data, extent.width*sizeof(float4), extent.width, extent.height);
copyParams.dstArray = volume;
copyParams.extent = extent;
copyParams.kind = cudaMemcpyHostToDevice;
cutilSafeCall(cudaMemcpy3D(&copyParams));
flowfield.normalized = 1;
flowfield.filterMode = cudaFilterModeLinear;
flowfield.addressMode[0] = cudaAddressModeClamp;
flowfield.addressMode[1] = cudaAddressModeClamp;
flowfield.addressMode[2] = cudaAddressModeClamp;
cutilSafeCall(cudaBindTextureToArray(flowfield, volume, channelDesc)); // this line gives the error
texScale.x = 1.f/extent.width;
texScale.y = 1.f/extent.height;
texScale.z = 1.f/extent.depth;
}
i googled and found a similar thread: The Official NVIDIA Forums | NVIDIA
but i don’t think it’s the same problem, because i’m pretty sure i only bind the texture reference once.
please help.
thank you.
more information about my machine and my coworker’s machine
mine: ubuntu 10.10, 8800GTX (doesn’t work); windows xp (works)
my coworker’s: ubuntu 9.10, GTX480 (works), GTX285 (works)