cudaCreateChannelDesc RGB byte mode is not supported?

The error is: "invalid channel descriptor"

cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc(8, 8, 8, 0, cudaChannelFormatKindUnsigned);

cudaArray* cu_array;
cudaError_t err = cudaMallocArray(&cu_array, &channelDesc, width, height);
if (cudaSuccess != err) {
    printf("Cuda error in file '%s' in line %i : %s.\n", __FILE__, __LINE__, cudaGetErrorString(err));
}

I can't find anything about this error message anywhere… how do I load a byte-component RGB image into a uchar3-type texture?

Textures of 3-tuples are not supported. Use a 4-tuple and waste a byte, or use a 2-tuple plus a 1-tuple.
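A minimal sketch of the 4-tuple route (assuming a 2D image of `width` × `height` pixels stored as `uchar4`; the fourth byte is just padding):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Sketch: an 8-bit, 4-channel (RGBA) descriptor instead of the
// unsupported 8/8/8/0 one. The alpha channel is wasted padding.
cudaChannelFormatDesc channelDesc =
    cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsigned);
// Equivalent templated shorthand:
//   cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<uchar4>();

cudaArray* cu_array;
cudaError_t err = cudaMallocArray(&cu_array, &channelDesc, width, height);
if (err != cudaSuccess) {
    printf("cudaMallocArray failed: %s\n", cudaGetErrorString(err));
}
```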

THIS WAS MY INITIAL GUESS. THANK YOU. THE SOLUTION: I USED A SINGLE-CHANNEL TEXTURE AND THEN, WITHIN THE KERNEL, USED THREE SEQUENTIAL TEXTURE ACCESSES. I think it's slower than a single 4-byte read…

However, I suspect that re-packing the host RGB image into a device RGBA texture would take even longer.

On the other hand, if each thread accesses three sequential texture bytes, does that mean the access is non-coalesced?

No need to shout!

The coalescing restrictions do not apply to texture fetches.

If you're lucky, you can use cudaMemcpy2D to repack: copy with a width of 3 bytes and a destination pitch of 4 bytes, treating each pixel as one "row" of the 2D copy.
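A sketch of that repacking trick (assuming `h_rgb` is a tightly packed host RGB buffer and `d_rgba` was already allocated with `cudaMalloc` to hold 4 bytes per pixel; note that a copy with a 3-byte row width may be slow, so it's worth benchmarking against a simple repack kernel):

```cuda
#include <cuda_runtime.h>

// Repack tightly packed RGB (3 bytes/pixel) into padded RGBA
// (4 bytes/pixel) with a single strided 2D copy.
// The trick: treat every pixel as one "row" of the copy.
//   width  = 3 bytes   (RGB payload per pixel)
//   spitch = 3 bytes   (source pixels tightly packed)
//   dpitch = 4 bytes   (destination leaves 1 pad byte per pixel)
//   height = total number of pixels
size_t numPixels = (size_t)width * height;
cudaError_t err = cudaMemcpy2D(d_rgba, 4,      // dst, dst pitch in bytes
                               h_rgb,  3,      // src, src pitch in bytes
                               3, numPixels,   // row width in bytes, row count
                               cudaMemcpyHostToDevice);
```

The pad byte of every destination pixel is left untouched, so clear `d_rgba` first (e.g. with `cudaMemset`) if the alpha value matters to the kernel.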

Found this thread from a personal problem…
I don't think using a 4-tuple for an RGB image is a good idea, because tex2D will read the 3-tuple data as a 4-tuple, so the indices get messy.
If you mean transforming the image and putting a zero in the fourth element, I think that's quite wasteful of memory, especially when dealing with large images.