changing texture filterMode in a kernel function?

Hi all,

Is there a way to change the filterMode of a texture directly during the execution of a kernel function?

I tried a simple myTex.filterMode = cudaFilterModeLinear;

But the compiler answers “error: expression must have pointer-to-struct-or-union type”

I guess you will tell me that even if it were possible it would not be efficient at all, but I'm trying anyway!

(I'd like this because for some values I want my texture to be smoothed, and for other values not.)


– Pium

As far as I know, it is not possible. You'll need to do it yourself.

Yup, it is not possible to change the filtering state from within a kernel. But you should be able to bind the same array to two texture references with different filtering modes.
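For reference, a minimal sketch of the two-reference approach using the texture reference API (names like texPoint, texLinear and the float 2D format are just for illustration):

```cuda
#include <cuda_runtime.h>

// Two texture references over the same data, differing only in filterMode.
// Texture references must be declared at file scope.
texture<float, 2, cudaReadModeElementType> texPoint;   // nearest-neighbour
texture<float, 2, cudaReadModeElementType> texLinear;  // bilinear interpolation

void bindBoth(cudaArray *array)
{
    texPoint.filterMode  = cudaFilterModePoint;
    texLinear.filterMode = cudaFilterModeLinear;
    texPoint.normalized  = true;
    texLinear.normalized = true;

    // Both references bind to the same cudaArray: no data is duplicated.
    cudaBindTextureToArray(texPoint, array);
    cudaBindTextureToArray(texLinear, array);
}

__global__ void sample(float *out, float u, float v, bool smooth)
{
    // Pick the sampling mode per fetch at run time.
    out[threadIdx.x] = smooth ? tex2D(texLinear, u, v)
                              : tex2D(texPoint, u, v);
}
```

The filterMode fields can only be set on the host before binding, which is why the in-kernel assignment in the original question cannot work.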

As an aside, DirectX 10/11 separates the texture from the sampling state (i.e. filtering, wrap modes), so you can sample the same texture with different samplers, but this isn’t available in CUDA currently.

What exactly happens when the array is bound to a texture reference?

If I bind my array to two texture references, will I lose a lot of memory or not?

Currently, to do what I want, I take the normalized texture coordinates, compute the non-normalized coordinates, keep only the integer part, then re-normalize. That way I effectively read the texture without interpolation. It works, but I'm sure two texture references would be more efficient!
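The workaround described above might look like this in device code (a sketch assuming a 1D float texture myTex bound with normalized coordinates and linear filtering; the name and dimensionality are illustrative):

```cuda
#include <cuda_runtime.h>

// 1D texture bound elsewhere with cudaFilterModeLinear and normalized coords.
texture<float, 1, cudaReadModeElementType> myTex;

// Emulate point sampling on a linearly filtered texture.
__device__ float fetchUnfiltered(float u, int width)
{
    float x = floorf(u * width);              // non-normalized, integer part only
    return tex1D(myTex, (x + 0.5f) / width);  // re-normalize to the texel centre
}
```

Sampling at the texel centre (the +0.5f) keeps the linear filter's weights entirely on one texel, so the fetch returns the unblended value.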

Thank you.

No, you won't lose any memory by having two texture references; a texture reference is just that, a reference to texture memory. The data itself is stored in the cudaArray, which is just an abstraction of the special texture memory layout in global memory.
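To make the separation concrete, here is a sketch of allocating and filling the cudaArray that the texture references above would point at (width, height, and hostData are assumed placeholders):

```cuda
#include <cuda_runtime.h>

// The cudaArray holds the actual texels; texture references only point to it.
void uploadTexture(const float *hostData, int width, int height,
                   cudaArray **arrayOut)
{
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    cudaMallocArray(arrayOut, &desc, width, height);
    cudaMemcpyToArray(*arrayOut, 0, 0, hostData,
                      width * height * sizeof(float),
                      cudaMemcpyHostToDevice);
}
```

Binding one, two, or more texture references to the array returned here costs no extra device memory.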

Hi Simon,
Do you know if there is a performance penalty in using a lot of texture references in a kernel?
I noticed that even if a texture reference is declared but not used in the kernel (because of a test based on a template parameter, for instance), it is still declared in the cubin. Do you know if the next optimization phase will remove it?


Hi Cyril, there is certainly a per-bound-texture cost when launching a kernel (the driver has to update texture headers in hardware, etc.), but unused texture references shouldn't affect performance. If they do, I would consider that a bug.