2D arrays host to CUDA

I am having trouble finding any examples of host-to-CUDA 2D array copying that show how the data is laid out on the host side. Every example shows the CUDA side but not the host side.

My problem is that I am getting incorrect results when accessing an array bound to a texture. First, I define my data on the host side as a 1D array of unsigned ints, like so:

[codebox]unsigned int m_data[INT_COUNT];[/codebox]

I then pass that through to my cuda code as an argument as:

[codebox]extern "C"

void Cuda_Code(unsigned int* pVolumeData, int nNumVolData)

{

    // ...

}[/codebox] (that isn't the real code, but you get the idea)

I have a texture defined in my cuda code like so:

[codebox]texture<uint, 2, cudaReadModeElementType> volumeDataDTex;[/codebox]

I then allocate the CUDA memory, copy my host memory to it, and bind the texture to the array like so:

[codebox]cudaArray* volumeTexArray = 0;

cudaMallocArray(&volumeTexArray, &volumeDataDTex.channelDesc, INT_COUNT, nNumVolData);

cudaMemcpy2DToArray(volumeTexArray, 0, 0, pVolumeData, INT_COUNT * sizeof(uint), INT_COUNT * sizeof(uint), nNumVolData, cudaMemcpyHostToDevice);

cudaBindTextureToArray(volumeDataDTex, volumeTexArray);[/codebox]

When I access the data in my kernel I do it like so:

[codebox]uint sliceP = tex2D(volumeDataDTex, index, nStepcount);[/codebox]

I do not believe I am setting up the texture correctly, because the data comes back wrong even though I can verify it is correct on the CPU side. My understanding of how cudaMemcpy2DToArray works is: I give it source and destination buffers, the pitch is the width of a source row in bytes (which is also the next argument; I am not sure why it takes both), and then the height is the number of rows. I am guessing the pitch and the width in bytes refer to the host-side layout, and the height is what I want the CUDA array to have (apparently it does not get that information from the cudaArray?).

What am I misunderstanding? It seems I should be getting good data here, but I am not. The reference guide is a bit cryptic.

In my experience, when you have a 2D/3D array, tab[x][y] is tex2D(y, x) and tab[x][y][z] is tex3D(z, y, x),

and you must declare your array according to the way the texture is read.

Hope that helps,


I have tried indexing it as tex2D(x, y) and as tex2D(y, x), and neither seems to be correct. The NVIDIA programming guide says:

[codebox]tex2D(texture<Type, 2, readMode> texRef, float x, float y);[/codebox]

Otherwise does it look like I am using the calls correctly?

How it is read depends mostly on the channelDesc, I think, so maybe you need to declare it yourself instead of using the default texture channel (which is probably of float type) and pass it as an argument in your binding call.

Here is my code to read a tab[x_size][y_size] of float, which works for me:

[codebox]// texture declared at global scope

texture<float, 2, cudaReadModeElementType> tex_tab;

// Allocate CUDA array in device memory

cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<float>();

cudaArray* array_tab;

cudaMallocArray(&array_tab, &channelDesc, y_size, x_size);

// Copy host memory --> array on the GPU

cudaMemcpyToArray(array_tab, 0, 0, &tab[0][0], x_size * y_size * sizeof(float), cudaMemcpyHostToDevice);


// Set texture parameters

tex_tab.addressMode[0] = cudaAddressModeClamp;

tex_tab.addressMode[1] = cudaAddressModeClamp;

tex_tab.filterMode = cudaFilterModePoint;

tex_tab.normalized = false;

// Bind the array to the texture

cudaBindTextureToArray(tex_tab, array_tab, channelDesc);

// read in the kernel

val += tex2D(tex_tab, yi, xi);[/codebox]