EDIT - poking around in the SDK examples, I discovered that the imageDenoising example uses a single array and just packs/unpacks each float3 into a single unsigned int. That seems workable, so I’ll assume it’s standard practice and go with it. I’ve also got it working with three separate 32-bit float textures if I want more precision…although that means storing the image in three separate arrays.
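For anyone landing here later, the pack/unpack trick looks roughly like this. I don’t have the sample open, so the exact bit order may not match imageDenoising’s `make_color` helper, and `packRGB`/`unpackRGB` are names I made up; tag them `__host__ __device__` if you want to call them inside a kernel.

```cpp
// Pack three floats in [0,1] into one 32-bit word, 8 bits per channel.
// Sketch only: imageDenoising may order the channels differently.
unsigned int packRGB(float r, float g, float b)
{
    unsigned int ir = (unsigned int)(r * 255.0f + 0.5f);
    unsigned int ig = (unsigned int)(g * 255.0f + 0.5f);
    unsigned int ib = (unsigned int)(b * 255.0f + 0.5f);
    return (ib << 16) | (ig << 8) | ir;   // 0x00BBGGRR
}

// Inverse of packRGB: recover the three channels from the packed word.
void unpackRGB(unsigned int c, float *r, float *g, float *b)
{
    *r = (float)( c        & 0xff) / 255.0f;
    *g = (float)((c >>  8) & 0xff) / 255.0f;
    *b = (float)((c >> 16) & 0xff) / 255.0f;
}
```

Note this quantizes each channel to 8 bits, which is why I fell back to three full float textures when I wanted more precision.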
Forgive me if this is covered elsewhere. I’ve read the programming guide and didn’t find the answer there, and when I searched here all I found was other people asking the same question with no answers.
I start with an image buffer. Each channel is stored in row-major format. In other words, if v = channel index, then
img[ v*width*height + width*y + x ] = img(x, y, v)
I can read this as a 1-channel image by setting it up like this:
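(My original snippet isn’t shown here, but the single-channel setup was the usual pattern with the texture reference API, something like this sketch; `tex`, `d_array`, and `setupOneChannel` are placeholder names.)

```cuda
// Bind a width x height single-channel float image to a 2D texture.
texture<float, 2, cudaReadModeElementType> tex;

void setupOneChannel(const float *h_img, int width, int height)
{
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    cudaArray *d_array;
    cudaMallocArray(&d_array, &desc, width, height);
    cudaMemcpyToArray(d_array, 0, 0, h_img,
                      width * height * sizeof(float),
                      cudaMemcpyHostToDevice);
    cudaBindTextureToArray(tex, d_array, desc);
}
// In a kernel: float v = tex2D(tex, x + 0.5f, y + 0.5f);
```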
Now, I wanted to modify that to work for 3 channels. I have no idea how to describe the layout of the values to CUDA…how to tell it whether I’m using planar RRR GGG BBB vs interleaved RGB RGB RGB, etc. Perhaps that’s my problem. Anyway, I used float4, since float3 isn’t allowed as a texture element type. This is what I did:
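(Again my snippet is missing; what I attempted was roughly the sketch below, with placeholder names. One thing I’ve since realized: textures only understand interleaved elements, so the host buffer has to be repacked from planar RRR GGG BBB into xyzw-per-texel float4s before the copy. If I copied my planar buffer as-is, the channels would be scrambled, which might explain the empty output.)

```cuda
// Bind a width x height image of interleaved float4 texels to a 2D texture.
// The cudaArray element type must match the channel descriptor exactly.
texture<float4, 2, cudaReadModeElementType> tex4;

void setupFourChannel(const float4 *h_img, int width, int height)
{
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float4>();
    cudaArray *d_array;
    cudaMallocArray(&d_array, &desc, width, height);
    cudaMemcpyToArray(d_array, 0, 0, h_img,
                      width * height * sizeof(float4),
                      cudaMemcpyHostToDevice);
    cudaBindTextureToArray(tex4, d_array, desc);
}
// In a kernel: float4 rgb = tex2D(tex4, x + 0.5f, y + 0.5f);
```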
That didn’t work. I’m not exactly sure which part failed, since I’m just getting started and haven’t mastered debugging yet, but I got no output at all. I suppose I could work around this by creating three separate textures, one for each of the red, green, and blue channels…but is that the preferred way?
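(Per the edit above, the three-texture workaround is what I eventually got running. A sketch, again with made-up names, using 2D textures since the data is a 2D image; each channel plane of the planar buffer gets its own cudaArray:)

```cuda
// One float texture per channel; texture<float, 2> defaults to
// cudaReadModeElementType.
texture<float, 2> texR;
texture<float, 2> texG;
texture<float, 2> texB;

// h_img is the planar buffer from the question: channel v starts at
// h_img + v*width*height.
void setupThreeChannels(const float *h_img, int width, int height)
{
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    size_t planeBytes = (size_t)width * height * sizeof(float);
    cudaArray *aR, *aG, *aB;
    cudaMallocArray(&aR, &desc, width, height);
    cudaMallocArray(&aG, &desc, width, height);
    cudaMallocArray(&aB, &desc, width, height);
    cudaMemcpyToArray(aR, 0, 0, h_img,                  planeBytes, cudaMemcpyHostToDevice);
    cudaMemcpyToArray(aG, 0, 0, h_img + width*height,   planeBytes, cudaMemcpyHostToDevice);
    cudaMemcpyToArray(aB, 0, 0, h_img + 2*width*height, planeBytes, cudaMemcpyHostToDevice);
    cudaBindTextureToArray(texR, aR, desc);
    cudaBindTextureToArray(texG, aG, desc);
    cudaBindTextureToArray(texB, aB, desc);
}
```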