Based on several previous posts, it seems it is not possible to have an array of textures that kernel functions can index into. My application needs to convolve an image with thousands of different small patches. As far as I can tell, my options are:
(1) Don’t use textures at all. Undesirable, as textures are extremely useful for this task and, apart from this one oversight, very well supported.
(2) Join all the patches into one big texture. Not possible, due to texture size limitations.
(3) Use branching in the kernel code. Not really practical, as this would incur a massive performance hit due to divergent warps.
(4) A variant of #2 that makes multiple, sequential kernel invocations, each time stuffing as many patches into a single texture as the size limitations allow. This is probably what I’ll have to do, barring…
(5) Something fantastically clever that one of you thinks of.
(6) Wait for NVIDIA to remove this limitation.
I’d be thrilled if anybody could fill in #5! Otherwise, would NVIDIA care to comment on a timeframe for #6?
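In case it helps make #4 concrete, here's a minimal host-side sketch of the batching arithmetic: tile the small patches onto a grid inside each large texture, and launch the kernel once per filled texture. All the dimensions below are made-up placeholders for illustration, not queried hardware limits.

```python
import math

def plan_batches(num_patches, patch_w, patch_h, max_tex_w, max_tex_h):
    """Plan how to pack small patches into large textures (option #4).

    Patches are laid out on a regular grid inside each texture.
    Returns (patches_per_texture, number_of_kernel_launches).
    """
    cols = max_tex_w // patch_w   # patches per row of the big texture
    rows = max_tex_h // patch_h   # rows of patches per texture
    per_texture = cols * rows
    launches = math.ceil(num_patches / per_texture)
    return per_texture, launches

# e.g. 5000 patches of 64x64, packed into hypothetical 2048x2048 textures:
per_tex, launches = plan_batches(5000, 64, 64, 2048, 2048)
# per_tex = 1024 patches per texture, so 5 sequential kernel launches
```

Inside the kernel, patch i of a batch would then be addressed by offsetting the texture coordinates by ((i % cols) * patch_w, (i // cols) * patch_h), so no branching is needed.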