Write bytes to an OpenGL 2D texture

Hi,

I’m using a CUDA kernel to output data to a 2D OpenGL texture (RGBA, unsigned byte format: 8 bits per color channel, 32 bits per pixel).

I’m currently using the surface API and write to the texture via surf2Dwrite inside the kernel.
This works fine; however, the code gets very messy because I’m writing raw data (floats, ints, shorts, etc.) rather than pixels.

I also need to write several pixels per iteration, and because surf2Dwrite requires an x and a y coordinate, I have to manually ensure that I jump to the next row of the texture once the x value exceeds its width.

So for example, in order to write two unsigned shorts, I need to do something like:

uchar4 pixelData;

// pack the first uint16_t (big-endian within the pixel)
uint16_t data1 = 123;
pixelData.x = (data1 >> 8) & 0xff;
pixelData.y = data1 & 0xff;
// pack the second uint16_t
uint16_t data2 = 456;
pixelData.z = (data2 >> 8) & 0xff;
pixelData.w = data2 & 0xff;
// pixel is full, write it to the texture
// (surf is the cudaSurfaceObject_t for the texture;
//  the x coordinate of a surface write is in bytes)
surf2Dwrite(pixelData, surf, x * sizeof(uchar4), y);
// select the next pixel to write, taking x overflow into account, write more data, etc.
++x;
if (x >= width)
{
    ++y;
    x -= width;
}
// ...
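For reference, all that bookkeeping boils down to a bit of arithmetic. Here is a host-side sketch in plain C++ of what the code above computes (the function names are mine, purely for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <utility>

// Big-endian packing of a uint16_t into two bytes,
// matching the shifts/masks in the snippet above.
inline void packU16(uint16_t v, uint8_t &hi, uint8_t &lo)
{
    hi = (v >> 8) & 0xff;
    lo = v & 0xff;
}

// Map a linear pixel index to (x, y) texture coordinates --
// this is exactly what the manual overflow check accomplishes.
inline std::pair<int, int> pixelIndexToXY(std::size_t pixelIndex, int width)
{
    return { static_cast<int>(pixelIndex % width),
             static_cast<int>(pixelIndex / width) };
}
```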

Instead of dealing with all of that, I would like to treat the texture as a raw byte array and write the bytes directly by index. Is there any way in CUDA to do that? I’ve seen many examples involving OpenGL textures, but they all use surf2Dwrite with float4 or uchar4 data.
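To make the question concrete: if the texture were an ordinary byte buffer, the semantics I’m after would look like the host-side mock-up below. ByteImage and its methods are my own illustration, not anything from the CUDA API:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Mock-up: treat a width x height RGBA8 image as one flat byte array
// and write bytes directly by linear index -- no (x, y) bookkeeping.
struct ByteImage
{
    int width;                   // pixels per row
    std::vector<uint8_t> bytes;  // width * height * 4 bytes

    ByteImage(int w, int h)
        : width(w), bytes(static_cast<std::size_t>(w) * h * 4, 0) {}

    void writeByte(std::size_t byteIndex, uint8_t value)
    {
        bytes[byteIndex] = value;  // row wrap-around is implicit
    }

    // The (x, y, channel) location a linear byte index corresponds to,
    // for comparison with surf2Dwrite's coordinates.
    void locate(std::size_t byteIndex, int &x, int &y, int &channel) const
    {
        std::size_t rowBytes = static_cast<std::size_t>(width) * 4;
        y = static_cast<int>(byteIndex / rowBytes);
        x = static_cast<int>((byteIndex % rowBytes) / 4);
        channel = static_cast<int>(byteIndex % 4);
    }
};
```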

I tried using surf1Dwrite with the x value as a byte index; however, that didn’t seem to do anything and the texture stays black.