invalid texture reference

I only have a single .cu file.

I have declared the texture as global and static, outside the main() function, like this:

static texture<float,3,cudaReadModeElementType> tex;

Now, inside main, I have loaded a 3D binary image into a 1D array (h_image) in host memory and then copied it into device memory using the following code.

//To Load the Same Image in Device Memory in cudaArray.
cudaArray* d_image;

// setup texture dimension
cudaExtent extent;
extent.width = width;
extent.height = height;
extent.depth = depth;

// texture channel descriptor
cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<float>();

// memory allocation for texture, on GPU, allocates GPU memory as 3D cuda memory
cudaMalloc3DArray(&d_image, &channelDesc, extent);

// copy from host memory to device memory
cudaMemcpy3DParms copyParams = {0};
copyParams.srcPtr = make_cudaPitchedPtr((void*)h_image, width*sizeof(float), width, height); // pitch = row width in bytes, then width and height in elements
copyParams.dstArray = d_image;
copyParams.extent = extent;
copyParams.kind = cudaMemcpyHostToDevice;

//copy from host memory to device memory
safecall( cudaMemcpy3D(&copyParams), "cudaMemcpy3D" );

// set texture parameters
tex.normalized = true;
tex.filterMode = cudaFilterModeLinear;
tex.addressMode[0] = cudaAddressModeWrap;
tex.addressMode[1] = cudaAddressModeWrap;
tex.addressMode[2] = cudaAddressModeWrap;

// bind cudaArray to the texture
cudaBindTextureToArray(tex, d_image, channelDesc);

cuda_kernel<<<64, 512>>>(); // note: 64*64 = 4096 threads per block exceeds the 512-thread-per-block limit on compute capability 1.x hardware

NOW

The code compiles, but when I run it, it gives me the error “invalid texture reference.”

I don’t know what I am doing wrong here.

I am loading the 3D image into a 1D array on the host, then copying it to the device. Are my steps correct?

Also, I read on this forum itself that we cannot directly fetch values from a cudaArray, so we have to bind it to a texture. That is what I was trying to do, and it is this that gives the error. What is wrong here?

Also, after binding, I am confused about how to write my kernel to loop through the values in the texture.
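To show the kind of thing I mean, here is a minimal sketch (not working code — the output buffer d_out and the grid/block mapping are made up) of a kernel that walks the volume: one thread per (x, y) column, looping over z. With tex.normalized = true, tex3D takes coordinates in [0, 1], so I add 0.5 and divide by the extent to hit texel centers:

```cuda
// File-scope texture (restated here so the sketch is self-contained).
texture<float, 3, cudaReadModeElementType> tex;

__global__ void cuda_kernel(float* d_out, int width, int height, int depth)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    for (int z = 0; z < depth; ++z) {
        // Normalized coordinates: (index + 0.5) / extent samples the
        // texel center along each axis.
        float u = (x + 0.5f) / (float)width;
        float v = (y + 0.5f) / (float)height;
        float w = (z + 0.5f) / (float)depth;
        d_out[x + y * width + z * width * height] = tex3D(tex, u, v, w);
    }
}
```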

Please help me, I am stuck here.

On second thought… can I just use the 3D cudaArray directly, without binding it to any texture? I am confused… I think it’s not possible. Is it?

Also, I am doing this to achieve 3D Sobel filtering on the 3D image I load into the cudaArray.
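For what it’s worth, here is a rough sketch of what I have in mind (again hypothetical, not tested code — the kernel name sobel3d and the buffer d_out are made up). It computes the gradient magnitude from the three 3×3×3 Sobel masks, which are separable products of a derivative stencil and a smoothing stencil. Note it assumes a different texture setup than above: normalized = false, cudaFilterModePoint, and cudaAddressModeClamp, so tex3D can be addressed with unnormalized texel coordinates:

```cuda
// File-scope texture (restated here so the sketch is self-contained).
texture<float, 3, cudaReadModeElementType> tex;

__global__ void sobel3d(float* d_out, int width, int height, int depth)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Each 3x3x3 Sobel mask is a separable product of a derivative
    // stencil d along one axis and a smoothing stencil s along the
    // other two.
    const float d[3] = { -1.0f, 0.0f, 1.0f };
    const float s[3] = {  1.0f, 2.0f, 1.0f };

    for (int z = 0; z < depth; ++z) {
        float gx = 0.0f, gy = 0.0f, gz = 0.0f;
        for (int k = 0; k < 3; ++k)
            for (int j = 0; j < 3; ++j)
                for (int i = 0; i < 3; ++i) {
                    // Unnormalized coordinates; clamp addressing handles
                    // the volume border for us.
                    float v = tex3D(tex,
                                    x + i - 1 + 0.5f,
                                    y + j - 1 + 0.5f,
                                    z + k - 1 + 0.5f);
                    gx += d[i] * s[j] * s[k] * v;
                    gy += s[i] * d[j] * s[k] * v;
                    gz += s[i] * s[j] * d[k] * v;
                }
        d_out[x + y * width + z * width * height] =
            sqrtf(gx * gx + gy * gy + gz * gz);
    }
}
```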

Am I doing it correctly? Am I following the right steps?

I can’t find any appropriate documentation about handling 3D data and performing operations on it, on nvidia.com or anywhere else…

So any suggestion is highly appreciated…

Thanks, and help me out here… I’m stuck… :ermm: :( :o :argh:

Anyone??? :(

I don’t remember having to put static in front of the texture declaration. Have you tried it without?

Hi there,

Thanks for the reply

Yeah, I have tried it without static as well, but it gives me the same error. Any other clues?

Is there any chance of a version problem causing this error?

When I enter nvcc --version, it gives me the following output:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2009 NVIDIA Corporation
Built on Thu_Apr__9_05:05:52_PDT_2009
Cuda compilation tools, release 2.2, V0.2.1221

Does this version support 3D textures?

If not… are there any references available that can give me an idea how to split a 3D texture into a series of 2D textures and work on those?

It should; AFAIR, 3D textures have been in since 2.0. I’ve never used 3D textures, so I can’t help further. Have you tried going through the example in the SDK step by step?

Don’t know if it matters, but have you declared the texture in the .cu file or in the …_kernel.cu file?

I only have one .cu file for now; just one file, and I am coding everything inside that same .cu file.

I am compiling it like this: nvcc test.cu

that’s all…

Am I missing something?

My file compiles fine, but when I try to run a.out (the usual way), it gives me this error.

Any clues?

E-mail the code to wanderine@hotmail.com and I can try the code at work tomorrow.

Sure… I will mail you soon…