An array of texture references?

Is there a way to create an array of texture references? For example:

texture<float, 2, cudaReadModeElementType> tex_array[ARRAY_LEN];

I get the following error when I define the above statement:
error LNK2001: unresolved external symbol ___cxa_vec_ctor

thanks

I also had unresolved externals several times, and solved this by additionally linking cudart.lib.

But it’s just a guess.

I don’t believe it is possible to do this directly. (Search for other posts on the subject for more details.) I implemented a texture array by creating C macros like this:

#define arraytexFetch(_texbasename, _tu, _tv, _texnum, _return) \
{ \
    switch (_texnum) \
    { \
    case 0: \
        _return = tex2D(_texbasename##00, (_tu), (_tv)); \
        break; \
    case 1: \
        _return = tex2D(_texbasename##01, (_tu), (_tv)); \
        break; \
    case 2: \
        _return = tex2D(_texbasename##02, (_tu), (_tv)); \
        break; \
    case 3: \
        _return = tex2D(_texbasename##03, (_tu), (_tv)); \
        break; \
    } \
}

I use this macro inside kernels. I created other macros to hide allocation, copying, and other details. There are some minor drawbacks, but it’s much simpler to maintain until we get real texture arrays.
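For illustration, a call site looks roughly like this; the texture names and the kernel here are hypothetical, but they match the two-digit suffixes the macro pastes onto the base name:

// Illustrative file-scope texture references; the macro turns the
// base name "tex" plus the case number into tex00..tex03.
texture<float, 2, cudaReadModeElementType> tex00;
texture<float, 2, cudaReadModeElementType> tex01;
texture<float, 2, cudaReadModeElementType> tex02;
texture<float, 2, cudaReadModeElementType> tex03;

__global__ void sampleKernel(float *out, int width, int texnum)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;

    float value;
    // Expands to a switch on texnum that fetches from tex00..tex03.
    // (Bounds checks omitted for brevity.)
    arraytexFetch(tex, x + 0.5f, y + 0.5f, texnum, value);
    out[y * width + x] = value;
}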

Thanks for the quick response, Jimh.

Unfortunately the macro trick won’t solve my problem: I don’t know at compile time how many texture references I need. I was going to allocate an upper bound (ARRAY_MAX) and then initialize as many as needed.

In fact, if 3D textures can be accessed via CUDA, as suggested in other threads, that would solve my problem too. Can anyone from NVIDIA comment on whether 3D textures will become available at some stage in the future?

Yes, 3D textures will be supported in a future release. Probably after CUDA 1.1, though.

I don’t know how big you need your slices to be, but you can allocate a pretty darn big 2D texture with CUDA and then just use some offsetting to store the different planes in the same 2D texture. “Flat 3D textures”. Mark first suggested using them in a fluid simulation he wrote a while back, I think.
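A minimal sketch of that addressing, assuming the slices are simply stacked on top of each other (the texture name and fetch helper are illustrative):

texture<float, 2, cudaReadModeElementType> flatVolumeTex;

// Fetch voxel (x, y, z) from a volume stored as one tall 2D texture,
// with slice z occupying rows [z * ydim, (z + 1) * ydim).
// In-slice bilinear filtering still works as long as the coordinates
// stay inside a single slice.
__device__ float fetchFlat3D(float x, float y, int z, int ydim)
{
    return tex2D(flatVolumeTex, x, z * ydim + y);
}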

I have the same problem - I don’t know the total size until runtime. Fortunately, the biggest size I need is still small enough that I can allocate the max without running out of memory. I considered writing code that only allocates and binds enough cudaArrays to cover the data size, but that complicated the code enough to not be worthwhile for me.

I created similar macros to implement 3D textures. In fact, most of the datasets in my algorithm are 3D volumes flattened into 4 2D textures and accessed using these “3D array” macros.

When we get 3D textures (or texture arrays) I’ll remove my macro code.

I’m anxious to get real 3-D textures so I can get trilinear filtering without exploding my reg count…

Increased register count is one of the drawbacks to my 3D texture macros. Fortunately I only need in-plane bilinear interpolation, so I can use the texture unit instead of wasting more registers. In any case, 3D textures would make life easier for me, too.

Yes. That’s what I am doing right now for 3D structures, but for that to work I need to allocate a 2D CUDA array of width = xdim and height = ydim*zdim. What I soon realized was that the height limit (2^15), not the size of the 3D structure or the available memory, is what limits the usability of the method: a 256x256x256 volume, for example, already needs a height of 256*256 = 65536, twice the limit.

I ran into the same problem. It’s exactly why I put my data into 4 textures and created the macros to access them.

How important is it to you that 3D textures work in emulation mode?

It’s possible we could release this sooner if we don’t have to implement the software path.

Hi,
I would also be very happy to see 3D texture access appear soon in CUDA, and the most important thing for me is to be able to use it in hardware. No problem if it isn’t supported in emulation mode at first :-)

I could handle not having 3D textures in emulation if it meant an earlier release. I don’t use emulation mode much. If I needed it, I could always write some code inside a __DEVICE_EMULATION__ #ifdef to get around it.
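Something like the following, assuming the eventual hardware fetch is spelled tex3D(); the software fallback and all the names here are hypothetical:

#ifdef __DEVICE_EMULATION__
// Hypothetical software fallback for emulation builds: read from a
// plain array instead of the (unsupported) 3D texture path.
#define volumeFetch(_x, _y, _z) softwareFetch3D((_x), (_y), (_z))
#else
// Hardware path, assuming the eventual 3D fetch looks like tex3D().
#define volumeFetch(_x, _y, _z) tex3D(volumeTex, (_x), (_y), (_z))
#endif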

You could be a lot more clever about how you pack the slices into the available 2D space…unless your xdim happens to be bigger than 2^15. I believe the max width is 2^16 :)
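For example, laying the slices out in a grid of tiles instead of one tall column keeps both dimensions in range; a sketch, where the names and the tilesPerRow parameter are illustrative:

texture<float, 2, cudaReadModeElementType> packedVolumeTex;

// Fetch voxel (x, y, z) from a volume whose xdim-by-ydim slices are
// packed tilesPerRow across, so neither texture dimension grows
// linearly with zdim alone.
__device__ float fetchPacked3D(float x, float y, int z,
                               int xdim, int ydim, int tilesPerRow)
{
    int tileX = z % tilesPerRow;  // tile column holding slice z
    int tileY = z / tilesPerRow;  // tile row holding slice z
    return tex2D(packedVolumeTex, tileX * xdim + x, tileY * ydim + y);
}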

I also don’t use emulation mode much.
By the way, will the DX10 “texture array” also be supported? I’m under the impression it behaves much like a 3D texture.

Simon,

I don’t need emulation, I’m happy to work just with hardware!

John

I would be absolutely content with 3D textures in hardware only.

By the way, is there any indication of when the next release of CUDA is scheduled? (Sorry if I’m pushing it a bit!)

I wouldn’t mind. Especially given that it already works using PTX… :)

Peter

I wouldn’t mind either if you exposed some more hardware features directly and did the emulation later; after all, being able to emulate isn’t that important.