I’ve been studying the manual section on CUDA textures, and a few things are unclear:
channelDesc “describes the value that is returned”. Is this independent of the source format of the texture? Is it describing the value returned to me, or the format of the source data?
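For reference, this is how I’m setting up the channel descriptor for my case (assuming an 8-bit-per-component RGBA source; the array dimensions are mine):

```cuda
// Channel descriptor for 8:8:8:8 unsigned RGBA source data
cudaChannelFormatDesc desc =
    cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsigned);
// equivalently: cudaCreateChannelDesc<uchar4>();

cudaArray* arr;
cudaMallocArray(&arr, &desc, width, height);
```

So my reading is that this describes the *source* layout in the cudaArray, not what tex2D hands back to me, but the wording in the manual leaves that ambiguous.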
Linear filtering: I’m unclear on when this actually works. It requires memory allocated as a cudaArray, but:
- Under the filtering section it says it “is only available for floating-point textures”.
- But under filterMode, it says “cudaFilterModeLinear is only valid for returned values of floating-point type”. It doesn’t say the texture source must be floating point. The GPU certainly supports filtering non-floating-point textures, so I don’t understand why CUDA would impose this limitation.
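If I’m reading the filterMode wording correctly, the restriction is on the *returned* type, which for an integer source means using cudaReadModeNormalizedFloat. Something like this (my own sketch, not from the manual):

```cuda
// Texture reference over uchar4 data; cudaReadModeNormalizedFloat makes
// tex2D return float4 components in [0,1], which is what
// cudaFilterModeLinear requires.
texture<uchar4, 2, cudaReadModeNormalizedFloat> tex;

void setupTexture()
{
    tex.filterMode = cudaFilterModeLinear;  // valid: fetches return floats
    tex.normalized = false;                 // address with unnormalized coords
    // With cudaReadModeElementType the same filterMode would be invalid,
    // since fetches would return integer values.
}
```

Is that the intended reading, i.e. an integer source is fine as long as the read mode promotes it to float?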
The texture<Type, Dim, ReadMode> has some confusing descriptions.
It says “Type” is the “data that is returned when fetching the texture”. This seems to overlap with channelDesc. I think what it really means is how the fetched data is interpreted; the actual returned value comes from channelDesc, so a conversion can take place if they differ?
In that case, is it valid to have, say, a Type of uchar4 but a channelDesc of 32 bits per component with cudaChannelFormatKindFloat? I’m trying to do something very simple: read an 8:8:8:8 RGBA texture as a CUDA texture with linear filtering enabled, and write it out to another RGBA texture.
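To make the goal concrete, this is roughly the kernel I have in mind (the output buffer layout and names are my own assumptions):

```cuda
// 8:8:8:8 RGBA source; normalized-float read mode so linear filtering is legal
texture<uchar4, 2, cudaReadModeNormalizedFloat> srcTex;

__global__ void copyRGBA(uchar4* dst, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Filtered fetch returns normalized floats in [0,1];
    // +0.5f samples at the texel center
    float4 c = tex2D(srcTex, x + 0.5f, y + 0.5f);

    // Convert back to 8 bits per component for the output
    dst[y * width + x] = make_uchar4(c.x * 255.0f, c.y * 255.0f,
                                     c.z * 255.0f, c.w * 255.0f);
}
```

Is this the expected pattern, or am I supposed to match Type to channelDesc exactly?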
I looked at the functions defined in texture_fetch_functions.h, and there seem to be explicit conversions going on from float to int and vice versa. For example:
    template<> __inline__ __device__ int4 tex2D(texture<int4, 2, cudaReadModeElementType> t, float x, float y)
    {
        int4 v = __itexfetch(t, make_float4(x, y, 0, 0));
        return make_int4(v.x, v.y, v.z, v.w);
    }

    static __inline__ __device__ float4 tex2D(texture<uchar4, 2, cudaReadModeNormalizedFloat> t, float x, float y)
    {
        uint4 v = __utexfetch(t, make_float4(x, y, 0, 0));
        float4 w = make_float4(__int_as_float(v.x), __int_as_float(v.y), __int_as_float(v.z), __int_as_float(v.w));
        return make_float4(w.x, w.y, w.z, w.w);
    }
This last function takes a uchar4 texture (standard RGBA) and returns a float4, so presumably the GPU is doing a conversion there. Also, the boxFilter sample has a bunch of dead code in it.