Problem with cudaChannelFormatDesc: weird values

Hi,

I have a question concerning a channel descriptor.

I want to map the imageData of a 3-channel OpenCV IplImage to a cudaArray and do some pixel operations on it.
To do that, I think I first have to create a channel descriptor.

Each channel of the image has an 8-bit depth, so I thought I could do something like this:

cudaChannelFormatDesc channelDesc =
cudaCreateChannelDesc(8, 8, 8, 0, cudaChannelFormatKindFloat);

Looks alright to me. Compiles just fine.
Now when I set a breakpoint somewhere after this code snippet and take a look at channelDesc, I get this weird value for x:

channelDesc
x = 126222400
y = 8
z = 8
w = 8
f = cudaChannelFormatKindSigned (even though I passed in cudaChannelFormatKindFloat)

Can somebody tell me what I’m doing wrong?

Thanks a lot!

In the runtime API, you can only use 32-bit floats. I believe that the 16-bit float formats are made available to the driver API. I’m not aware of any 8-bit floating point format.

Of course, this doesn’t totally explain why the elements in the structure are changing on you.
EDIT: Come to think of it, you can’t have a 3-component texture either. Only 1-, 2-, and 4-component textures are allowed.
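
For what it’s worth, a descriptor the runtime should accept for padded 8-bit RGBA data would look something like this (untested sketch, just to illustrate the point):

cudaChannelFormatDesc channelDesc =
cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsigned); // four 8-bit unsigned channels

// or, equivalently, let the templated helper fill it in from the element type:
cudaChannelFormatDesc channelDesc2 = cudaCreateChannelDesc<uchar4>();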

OK, thanks for your reply!

Can I just use a cudaArray without a texture to map all three channels of an RGB image into the cudaArray?
I want to do the pixel operations on each pixel value in the three channels at once.

Is that possible, and do I need the channel descriptor for that?

Correct me if I’m wrong, but what you have is an image stored with 8 bits for red, 8 bits for green, and 8 bits for blue, and you want to read it in the “normal” graphics mode where you get a floating-point number from 0 to 1 for each channel?

If so, then you want the cudaReadModeNormalizedFloat read mode for the texture. You will need to pad your data when you load it so that the last 8 bits of the 32-bit value (the alpha channel, in graphics speak) are there, even if you don’t use them.
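
Something like this is what I mean by padding (rough sketch only; src, srcStep, width and height are assumed to come from your IplImage’s imageData, widthStep, width and height):

unsigned char* rgba = (unsigned char*)malloc(width * height * 4);
for (int y = 0; y < height; ++y) {
    const unsigned char* row = src + y * srcStep; // IplImage rows may be padded, hence the step
    for (int x = 0; x < width; ++x) {
        rgba[(y * width + x) * 4 + 0] = row[x * 3 + 0];
        rgba[(y * width + x) * 4 + 1] = row[x * 3 + 1];
        rgba[(y * width + x) * 4 + 2] = row[x * 3 + 2];
        rgba[(y * width + x) * 4 + 3] = 255; // the padding byte (alpha), unused
    }
}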

Disclaimer: I haven’t used textures in this way yet, so the code is entirely based on what I read in the programming guide for setting it up.

Declare your texture reference like so:

texture<uchar4, 2, cudaReadModeNormalizedFloat> tex_ref;

Note: I don’t know if you want filtering, wrapping or normalized coordinates, so I leave the options as default. See section 4.5.2.4 on how to set these things up.

To allocate, set filtering options, and bind the array, all you need is the following.

cudaArray* cuArray;

cudaMallocArray(&cuArray, &tex_ref.channelDesc, width, height);

// set tex_ref.normalized, filterMode and addressMode here to your liking

cudaBindTextureToArray(tex_ref, cuArray);
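
For example (just a guess at what you’d want), that middle comment could be filled in with something like:

tex_ref.normalized = false; // plain texel coordinates rather than [0,1)
tex_ref.filterMode = cudaFilterModePoint; // nearest texel; cudaFilterModeLinear if you want interpolation
tex_ref.addressMode[0] = cudaAddressModeClamp; // clamp reads that fall outside the image
tex_ref.addressMode[1] = cudaAddressModeClamp;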

Obviously, you need to copy the data to the array sometime before you try to read it too :)
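
As a rough sketch (assuming the padded rgba buffer from earlier), I think the copy you want is cudaMemcpy2DToArray:

// source pitch and row width are given in bytes
cudaMemcpy2DToArray(cuArray, 0, 0, rgba, width * 4, width * 4, height, cudaMemcpyHostToDevice);

and a trivial kernel that reads the texture back as normalized floats (again untested, and the inversion is just for illustration) might be:

__global__ void invert(float4* out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        float4 pix = tex2D(tex_ref, x + 0.5f, y + 0.5f); // each channel arrives as a float in [0,1]
        out[y * width + x] = make_float4(1.0f - pix.x, 1.0f - pix.y, 1.0f - pix.z, pix.w);
    }
}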

I’ll also point out to you that the cudaArray method of texturing is really only needed if your kernels access the image with 2D-local patterns (or if you want all the fancy wrapping/normalized addressing modes). If you have an extremely regular memory access pattern across the rows of the image, a straight coalesced global memory read is the way to go. Or, if you really aren’t taking full advantage of the 2D cache offered by the cudaArray, you could get by with the simpler 1D textures read directly from device memory, which have their advantages too.
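
To make that last option concrete, binding a 1D texture straight to linear device memory might look roughly like this (names made up, untested):

texture<uchar4, 1, cudaReadModeNormalizedFloat> tex_linear;

uchar4* d_pixels;
cudaMalloc((void**)&d_pixels, width * height * sizeof(uchar4));
cudaMemcpy(d_pixels, rgba, width * height * sizeof(uchar4), cudaMemcpyHostToDevice);
cudaBindTexture(0, tex_linear, d_pixels, width * height * sizeof(uchar4));

// inside the kernel, pixel (x, y) is then read with:
// float4 pix = tex1Dfetch(tex_linear, y * width + x);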

Thank you very much Mister Anderson :D

That’s just what I need! Saves me a whole lot of time!