pass texture reference as a parameter

Can I pass a texture reference as a parameter to __global__ functions?

In mykernel.cu, define:
texture<float, 1, cudaReadModeElementType> tex;

With mykernel declared as mykernel(texture<float, 1, cudaReadModeElementType>), can I call it this way:
mykernel(&tex)

It seems that it doesn’t work. When compiling mykernel.cu, I get two warnings:

  1. warning C4047: ‘function’ : ‘__f1texture *’ differs in levels of indirection from ‘__7textureIfLi1EL19cudaTextureReadMode0EE *’

  2. warning C4022: ‘mykernel’ : pointer mismatch for actual parameter 1

What’s the appropriate way to handle a texture reference as a parameter?

Thank you,

I changed the code a little:

In mykernel.cu, I still define:
texture<float, 1, cudaReadModeElementType> tex;

With mykernel now declared as mykernel(texture<float, 1, cudaReadModeElementType>&), can I call it this way:
mykernel(tex)

However, I get the same warnings during compilation.

At runtime, inside mykernel (called as mykernel(tex)), when I call:
tex1D(tex, 1.0f)

It crashes with the following error:
First-chance exception at 0x7c812a5b in mykernel.exe: Microsoft C++ exception: cudaError at memory location 0x0012ec98

The address of tex is valid, but its contents cannot be read.

My objective was to store constants in a texture and use them in the kernel; that way, I suppose, the computation can be faster.

How can I fix this problem? Thank you,

Passing a texture reference as a parameter is physically impossible.
However, you can just bind different things to the same reference instead.
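
For example, here is a minimal sketch of that pattern with the legacy texture reference API (deprecated and later removed in recent CUDA releases). The out and n parameters are placeholders I added; the reference lives at file scope and the kernel names it directly, so nothing is passed in:

// mykernel.cu: texture reference declared at file (module) scope
texture<float, 1, cudaReadModeElementType> tex;

// The kernel refers to the texture by name; it is never a kernel parameter.
__global__ void mykernel(float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = tex1Dfetch(tex, i);  // tex1Dfetch reads linear memory bound with cudaBindTexture; tex1D is for CUDA arrays
}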

Why impossible?

I saw that CUDA library functions such as cudaBindTexture take texture references as parameters.

The texture ID to sample from is hard-coded into the tex instruction, so you can’t vary it with a parameter.

CUDA library functions can take texture references as parameters, but they are CPU functions, not GPU kernels.

cudaBindTexture is a host function, not a kernel.
The tex.xx instruction can only fetch from a constant texture ID, so passing a reference as a parameter is simply impossible.
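
To illustrate, here is a rough host-side sketch, assuming the file-scope tex and mykernel from the sketch above; d_a, d_b, d_out, n, and the launch configuration are made up for the example. cudaBindTexture runs on the CPU, and rebinding lets successive launches read different buffers through the same reference:

int main()
{
    const int n = 1024;                                // example size
    float *d_a, *d_b, *d_out;
    cudaMalloc(&d_a, n * sizeof(float));
    cudaMalloc(&d_b, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    // ... fill d_a and d_b with cudaMemcpy ...

    cudaBindTexture(0, tex, d_a, n * sizeof(float));   // first launch samples d_a through tex
    mykernel<<<(n + 255) / 256, 256>>>(d_out, n);

    cudaBindTexture(0, tex, d_b, n * sizeof(float));   // rebind: same reference, different data
    mykernel<<<(n + 255) / 256, 256>>>(d_out, n);

    cudaUnbindTexture(tex);
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_out);
    return 0;
}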