Embedding a texture in a class

Has anyone devised a way to embed a texture in a class? The documentation states that texture references usually have to go at file scope. This is going to rapidly get painful when used in the code I’m currently developing. I’d like to be able to do something like

class TextureClass {

  texture<float, 1, cudaReadModeElementType> dtl_myTex;

  float *d_vals;

public:

  TextureClass( float *lookupArray, size_t n ) {
  	// Copy lookupArray into d_vals, bind d_vals to dtl_myTex
  }

  __device__ float ReadTexture( const int i ) {
  	return tex1Dfetch( dtl_myTex, i );
  }
};
I’d then pass instances of TextureClass to kernels, which would then be able to use the embedded lookup array. Suggestions gratefully received.

I haven’t. I have “sort of” circumvented some of the worst of the file-scope problems on the host side by passing around function pointers to wrapper functions that are in the correct scope, but it is less than optimal. On the device side, I haven’t worked out a way around it, given that function pointers are verboten. Maybe Fermi will ease that restriction and make life easier, but on current hardware I haven’t worked out anything better.

I don’t get how anyone can use a texture at all unless the host and device code is all in one file (a ridiculous requirement)

It is precisely the same requirement kernels also have in the runtime API, and for the exact same reasons. I explained how to work around it here. Do you think complaining about this in three separate threads simultaneously is enough or should we move on to a fourth?

There is a very simple and effective way to handle ‘textures’ whose underlying storage is allocated in plain C++ files (or different .cu compilation units) from the texture reference. The outside code simply allocates and manages the pointer/cudaArray and passes it to the driver function that calls the kernel. The driver function binds the pointer/cudaArray to the texture reference and then calls the kernel. Binding is extremely cheap and does not hurt performance even if performed before every kernel launch. All scope issues are trivially satisfied because the kernel driver function must be in the same compilation unit as the actual kernel using the texture.
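A minimal sketch of that pattern might look like the following (the names `texRef`, `scaleKernel`, and `launchScale` are illustrative, not from the original posts; the bind uses the runtime API’s `cudaBindTexture`):

```
// kernel_module.cu -- texture reference, kernel, and driver function
// all live in one compilation unit, so the scope rules are satisfied.

// File-scope texture reference (required by the CUDA runtime API).
texture<float, 1, cudaReadModeElementType> texRef;

__global__ void scaleKernel( float *out, int n, float factor )
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if ( i < n )
        out[i] = factor * tex1Dfetch( texRef, i );
}

// Host-side driver: code in plain .cpp files allocates and fills
// d_lookup itself, then just calls this function.
extern "C" void launchScale( float *d_out, float *d_lookup,
                             int n, float factor )
{
    // Binding is cheap; doing it before every launch is fine.
    cudaBindTexture( 0, texRef, d_lookup, n * sizeof(float) );

    dim3 block( 256 );
    dim3 grid( (n + block.x - 1) / block.x );
    scaleKernel<<<grid, block>>>( d_out, n, factor );

    cudaUnbindTexture( texRef );
}
```

The texture name never appears outside this .cu file; callers only see the `launchScale` prototype.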

But that still means that each kernel has to ‘know’ which texture to use, rather than accepting it as an argument. And with the kernels and textures templated, it’s rapidly becoming a huge mess. Plus, if these templates start going into header files (which I’d like) then there’s the extra fun of making sure each texture name is both typed and globally unique.

That goes without saying. At the moment, the compiler has to do inline expansion of texture references and reads into assembler to provide the hook to launch the texture unit action. Without function pointers or a linker, it can’t be done for the same reasons that device functions must be inlined. The problem is effectively the same. But you can pass around a wrapper function which does the bind to a given kernel’s input textures and then calls the kernel. That is about as far as the current device compilation model can go.
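The “pass around a wrapper” idea can be sketched on the host side like this (the typedef and function names are hypothetical, not from the thread):

```
// Each wrapper hides which file-scope texture its kernel reads; outside
// code sees only a uniform host-side signature.
typedef void (*TexKernelLauncher)( float *d_out, float *d_in, int n );

// Wrappers are defined next to their kernels/textures, one per .cu unit,
// e.g.:
//   void launchWithTexA( float *d_out, float *d_in, int n );
//   void launchWithTexB( float *d_out, float *d_in, int n );

// Host code can then store and pass these like any other function
// pointer -- the texture names never leak out of their .cu files.
void runPipeline( TexKernelLauncher launch,
                  float *d_out, float *d_in, int n )
{
    launch( d_out, d_in, n );
}
```

This only works host-side, of course; on the device, the kernel is still hard-wired to its texture name.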

Sorry for irritating you by posting on multiple threads on a seemingly similar topic.

There are two separate issues here:

  1. A discussion of texture reference restrictions (CUDA API design)

  2. My own problem of trying to find a workaround (application development)

If the distinction is trivial to you, I apologize again.

Please bear with me, I’m going to have to get all Homer Simpson on you. Could you please dumb down your explanation or otherwise provide a snippet of code to illustrate your point? Thanks!! :sweat:

-EDIT- @MisterAnderson: avidday helped me with an example in another thread. I will learn from the example he posted and come back if I have more problems.

-EDIT- Yup, function wrapper works. Thanks!