[help]: compilation problem with CUDA texture usage

My platform is MS Win XP + VS .NET 2003, and the following code snippet is inserted directly into one of the *.cu files of the CUDA SDK sample projects.

//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
texture<unsigned char, 2, cudaReadModeNormalizedFloat> tex1;

__global__ void
func_kernel(texture<unsigned char, 2, cudaReadModeNormalizedFloat>& tex)
{
float res = texfetch(tex, 0.5, 0.5);
}

void test()
{
dim3 dimBlock(8, 8, 1);
dim3 dimGrid(1, 1, 1);

func_kernel<<< dimGrid, dimBlock, 0 >>>(tex1);

}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

I got the following compilation message:

Performing Custom Build Step
ptxas simpleTexture.ptx, line 64; error : Unknown symbol ‘__T22’

Can anyone tell me how to solve this problem? Thank you very much.

I don’t think you can pass references to textures around like that. At least, I haven’t been able to in non-emulation code.

Yes, declare the texture reference in global scope in the same compilation unit as the kernel or device function that uses it.
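For reference, a minimal sketch of that pattern, using the texfetch()/cudaBindTextureToArray() API from the CUDA version discussed in this thread (the array, output pointer, and kernel names are made up for illustration):

```
// Texture reference declared at global (file) scope, in the same
// compilation unit as the kernel that reads from it.
texture<unsigned char, 2, cudaReadModeNormalizedFloat> tex1;

__global__ void
func_kernel(float *out)
{
    // The kernel names the global texture reference directly;
    // it is never passed in as a parameter.
    out[0] = texfetch(tex1, 0.5f, 0.5f);
}

void test(cudaArray *array, float *d_out)
{
    // Bind the CUDA array to the texture reference on the host...
    cudaBindTextureToArray(tex1, array);

    // ...then launch; the kernel sees the binding through tex1.
    dim3 dimBlock(8, 8, 1);
    dim3 dimGrid(1, 1, 1);
    func_kernel<<< dimGrid, dimBlock, 0 >>>(d_out);
}
```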

Mark

What if I want to allocate a dynamic number of textures?

Something like this obviously would not work:

texture<float4, 2, cudaReadModeElementType> *texArray;

main(int nbTex)
{
    texArray = (texture<float4, 2, cudaReadModeElementType> *)
        malloc(sizeof(texture<float4, 2, cudaReadModeElementType>) * nbTex);

    for (int i = 0; i < nbTex; ++i)
    {
        CUDA_SAFE_CALL( cudaMallocArray( &(c->cu_arrays[i]), &(texArray[i]).channelDesc, width, height) );
        CUT_CHECK_ERROR( "cudaMallocArray() failed" );
    }
}

/*** KERNEL ***/

__global__ void
func_kernel(texture<float4, 2, cudaReadModeElementType> *texArray)
{
    texture<float4, 2, cudaReadModeElementType> mytex = texArray[blockIdx.x + blockIdx.y * gridDim.x];

    // now fetch the texture and compute...
}

So this gives the same error svd2cn described.

Is there another way to do this, or am I doing something wrong?

thanks

Ben

Unfortunately there’s no way to do this currently - you can’t index into an array of texture references. The same restriction exists when using the graphics API.

Texture arrays (which were added in DirectX 10) offer a way of doing something like this, but they’re not exposed in CUDA yet.
http://developer.download.nvidia.com/openg…xture_array.txt

Why not use cuTexRefCreate ? Then you should be able to get pointers to texture references which you can keep in an array. You can then bind cudaArrays to them as needed.

I haven’t tried this myself, but it seems like it would work. I do something similar in brook all the time.

Thank you very much, eelsen. I will try this way.

Also, Thanks to Simon for the answer.

But, the binding will still need to be done on the host using cuTexRefSetArray(). You can’t do it in the kernel.

Also, note that you can’t mix runtime API calls with driver API calls, in general.
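In a program written entirely against the driver API, the idea might look roughly like this (an untested sketch with error checking omitted; cuTexRefCreate and cuTexRefSetArray are the early driver-API calls mentioned above, and the function and parameter names here are made up):

```
#include <cuda.h>
#include <stdlib.h>

// Hypothetical: keep an array of driver-API texture references and
// bind a CUarray to each one from the host, before any kernel launch.
void bind_textures(CUarray *arrays, int nbTex)
{
    CUtexref *refs = (CUtexref *)malloc(sizeof(CUtexref) * nbTex);

    for (int i = 0; i < nbTex; ++i) {
        // Create a texture reference and attach a CUDA array to it.
        cuTexRefCreate(&refs[i]);
        cuTexRefSetArray(refs[i], arrays[i], CU_TRSA_OVERRIDE_FORMAT);
    }

    // ...launch kernels through the driver API, then release the
    // references with cuTexRefDestroy() when done.
    free(refs);
}
```

The binding still happens entirely on the host; the kernel itself cannot pick a texture reference at runtime.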

You can also define arrays of texture references using the runtime API through its low-level component (based on struct textureReference). You’d use cudaBindTextureToArray() to bind a CUDA array to a texture reference (see Section B.3.1.4 of the new version of the programming guide (http://developer.download.nvidia.com/compu…Guide_0.8.1.pdf)).

What method would you suggest, then, if we want to use an array of textures inside the kernel?

In OpenGL it is very easy because you can loop over texture IDs, but let's say I want to access a different texture depending on blockIdx. How would you suggest doing that?

Use an if statement…

if (blockid == 1)
    val = texfetch(tex1, x, y);
else if (blockid == 2)
    val = texfetch(tex2, x, y);
// etc...

Maybe not terribly efficient, but it would work.

Well, I could have up to 256 textures, and I’m looking for performance…

Do you have to have different textures? Why can’t you pack the different textures into one really big texture and then do some additional indexing to figure out where in the big texture you actually want to look?

I would like to do this, but there’s one problem. How would the resulting array of textureReferences be used to do texture fetches? The texfetch() overloads all take a texture as the first argument. Grepping the headers shows there isn’t any texture-fetch call that takes a textureReference.

Well, there’s more than one problem, but that’s the first I see. The second problem I see is I can’t pass the array of textureReferences to a kernel because the array variable is a pointer to the array in host memory.

If I keep praying, maybe 3D textures or DX10 texture arrays will make it into CUDA 1.0.