Driver API and cudaReadModeNormalizedFloat

I defined a texture reference as follows:

static texture<unsigned char, 2, cudaReadModeNormalizedFloat> tex;

and I’d like to bind a CUarray to the texture reference using the driver API:

    CUtexref texref;
    CU_SAFE_CALL(cuModuleGetTexRef(&texref, module, "tex"));
    CU_SAFE_CALL(cuTexRefSetArray(texref, cuarray, CU_TRSA_OVERRIDE_FORMAT));
    CU_SAFE_CALL(cuTexRefSetAddressMode(texref, 0, CU_TR_ADDRESS_MODE_WRAP));
    CU_SAFE_CALL(cuTexRefSetAddressMode(texref, 1, CU_TR_ADDRESS_MODE_WRAP));
    CU_SAFE_CALL(cuTexRefSetFilterMode(texref, CU_TR_FILTER_MODE_POINT));
    CU_SAFE_CALL(cuTexRefSetFlags(texref, CU_TRSF_READ_AS_INTEGER));
    CU_SAFE_CALL(cuTexRefSetFormat(texref, CU_AD_FORMAT_UNSIGNED_INT8, 1));

cuarray is a CUarray with format CU_AD_FORMAT_UNSIGNED_INT8.

However, in my kernel the texture fetches don’t return anything. It works only if I declare the texture reference with cudaReadModeElementType instead of cudaReadModeNormalizedFloat, or if I use the runtime API to bind the array to the texture reference.

What am I doing wrong?

The format specified when binding a texture to a texture reference must match the parameters specified when declaring the texture reference; otherwise, the results of texture fetches are undefined.

I suspect that’s what you’re hitting here: the CU_TRSF_READ_AS_INTEGER flag you set with cuTexRefSetFlags corresponds to cudaReadModeElementType, which contradicts the cudaReadModeNormalizedFloat in your texture declaration. The read mode set at bind time must match the read mode specified when declaring the texture reference.
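For reference, a sketch of the binding sequence without the integer-read flag (same calls as in your snippet, with the flags cleared to 0 so fetches return normalized floats, matching the cudaReadModeNormalizedFloat declaration):

```c
    CUtexref texref;
    CU_SAFE_CALL(cuModuleGetTexRef(&texref, module, "tex"));
    CU_SAFE_CALL(cuTexRefSetArray(texref, cuarray, CU_TRSA_OVERRIDE_FORMAT));
    CU_SAFE_CALL(cuTexRefSetAddressMode(texref, 0, CU_TR_ADDRESS_MODE_WRAP));
    CU_SAFE_CALL(cuTexRefSetAddressMode(texref, 1, CU_TR_ADDRESS_MODE_WRAP));
    CU_SAFE_CALL(cuTexRefSetFilterMode(texref, CU_TR_FILTER_MODE_POINT));
    /* No CU_TRSF_READ_AS_INTEGER: the 8-bit integer data is promoted to
       float in [0.0, 1.0] on fetch, as cudaReadModeNormalizedFloat expects. */
    CU_SAFE_CALL(cuTexRefSetFlags(texref, 0));
    CU_SAFE_CALL(cuTexRefSetFormat(texref, CU_AD_FORMAT_UNSIGNED_INT8, 1));
```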

We’ll clarify this point in the next version of the programming guide.


Thanks a lot! I completely overlooked that I had set the CU_TRSF_READ_AS_INTEGER flag.