cudaBindTexture and cuBlas use of cudaBindTexture


In the cublas routines, binding textures is done via:

    if ((cudaStat=cudaBindTexture (&texXOfs,texX,x,sizeX*sizeof(x[0]))) !=
        cudaSuccess) {

(4 arguments)

However, the documentation of cudaBindTexture clearly states that the
arguments are:

    cudaError_t cudaBindTexture(size_t* offset,
                                const struct textureReference* texRef,
                                const void* devPtr,
                                const struct cudaChannelFormatDesc* desc,
                                size_t size = UINT_MAX);

(5 arguments)
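For comparison, a call through this documented 5-argument overload would have to supply the channel descriptor explicitly. The sketch below is only an illustration, assuming declarations (texX, x, sizeX) analogous to the cuBLAS snippet above; bindExplicit is a hypothetical helper name:

```cuda
#include <cuda_runtime.h>

// Assumed texture reference, analogous to texX in the cuBLAS code.
texture<float, 1, cudaReadModeElementType> texX;

// Sketch: the 5-argument C-style overload with an explicit
// channel descriptor for the element type.
cudaError_t bindExplicit(const float* x, size_t sizeX)
{
    size_t texXOfs;
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    // texture<> derives from textureReference, so &texX is valid here.
    return cudaBindTexture(&texXOfs, &texX, x, &desc,
                           sizeX * sizeof(x[0]));
}
```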

Could someone please explain to me why the cublas routines work? Thanks.


The argument size (of type size_t) has been assigned a default value, so the last (fifth) argument is optional and need not be passed. Default arguments are a basic feature of C++ and most other high-level languages.

I do not agree. The fourth argument should be a cudaChannelFormatDesc,
but the fourth argument in the cuBLAS call is an integer, namely the size.
So something is not quite correct.


The C++ runtime API (as opposed to the C runtime API) has a few convenience functions. One of them uses the type of the texture reference to set the channel descriptor automatically. See the following function signature in section D.6.2.2 of the Programming Guide 1.1:

    template<class T, int dim, enum cudaTextureReadMode readMode>
    static __inline__ __host__ cudaError_t
    cudaBindTexture(size_t* offset,
                    const struct texture<T, dim, readMode>& texRef,
                    const void* devPtr,
                    size_t size = UINT_MAX);
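
This is the overload the cuBLAS snippet is using: because texRef is the typed texture<T, dim, readMode> object rather than a plain textureReference, the channel descriptor is derived from T, and only four arguments are needed. A minimal sketch, assuming the same hypothetical declarations as before:

```cuda
#include <cuda_runtime.h>

// Assumed texture reference with element type float.
texture<float, 1, cudaReadModeElementType> texX;

// Sketch: the templated 4-argument convenience overload. The channel
// descriptor is inferred from texX's element type, so this call is
// equivalent to the 5-argument form with cudaCreateChannelDesc<float>()
// passed explicitly.
cudaError_t bindConvenience(const float* x, size_t sizeX)
{
    size_t texXOfs;
    return cudaBindTexture(&texXOfs, texX, x, sizeX * sizeof(x[0]));
}
```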

Thank you MisterAnderson42. That makes sense. I had looked at the programming guide for such information, but I could not find it. Thanks!