OpenGL interoperability issue with 1 channel float texture.

Hello there.

I have a situation where I create a texture image and upload it to the GPU via regular OpenGL calls.

Then I call [font=“Courier New”]cudaGraphicsGLRegisterImage(&cudaRes, myBeautyTexID, GL_TEXTURE_2D, cudaGraphicsMapFlagsNone);[/font]. Note: cudaGLSetGLDevice(0); has been called beforehand, with no errors.

And there my problem begins:
If I use (for example) [font=“Courier New”]glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 64, 64, 0, GL_RGBA, GL_FLOAT, pixels);[/font] (that’s 4 floating-point channels), everything works fine: I can map and unmap the CUDA resource without errors.
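For reference, here is a minimal, self-contained sketch of the working 4-channel path (the function, the dummy pixel buffer, and the error check are mine, purely illustrative):

[codebox]#include <cstdio>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>
// GL_RGBA32F needs GL 3.0 / ARB_texture_float headers (e.g. GLEW)

// Call with a current GL context; returns the registered resource or NULL.
cudaGraphicsResource* registerTestTexture()
{
    static float pixels[64 * 64 * 4];  // dummy texel data

    // Create and fill the GL texture (4 float channels, the working case)
    GLuint texID;
    glGenTextures(1, &texID);
    glBindTexture(GL_TEXTURE_2D, texID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 64, 64, 0,
                 GL_RGBA, GL_FLOAT, pixels);

    // Register with CUDA, then map/unmap once as a smoke test
    cudaGraphicsResource* cudaRes = 0;
    cudaError_t err = cudaGraphicsGLRegisterImage(&cudaRes, texID,
                                                  GL_TEXTURE_2D,
                                                  cudaGraphicsMapFlagsNone);
    if (err != cudaSuccess) {
        printf("register failed: %s\n", cudaGetErrorString(err));
        return 0;
    }
    cudaGraphicsMapResources(1, &cudaRes, 0);
    cudaGraphicsUnmapResources(1, &cudaRes, 0);
    return cudaRes;
}
[/codebox]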

Now, if I use [font=“Courier New”]glTexImage2D(GL_TEXTURE_2D, 0, 1, 64, 64, 0, GL_LUMINANCE, GL_FLOAT, pixels);[/font] (one floating-point luminance channel), then cudaGraphicsGLRegisterImage fails with “invalid argument” (error 11). The texture itself still works fine in OpenGL.

The CUDA Programming Guide version 3.1.1, section 3.2.8.1 (page 41), specifies (I’m feeling like a lawyer here) that “[font=“Courier New”]cudaGraphicsGLRegisterImage()[/font] supports all texture formats with 1, 2 or 4 components and an internal type of float (e.g. GL_RGBA_FLOAT_32) and unnormalized integer (e.g. GL_RGBA8UI). (…)”.

So I’m lost here. Am I missing something? I’ve tried using GL_RED instead of GL_LUMINANCE in the glTexImage2D(…) call, but with no positive results.
Why won’t it let me register the texture for access via CUDA?
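For completeness, the next thing I plan to try is a sized single-channel internal format instead of the unsized “1” — the Guide’s own examples (GL_RGBA_FLOAT_32, GL_RGBA8UI) are all sized formats, so this is just a guess on my part:

[codebox]// Variant 1: sized luminance float internal format (ARB_texture_float)
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, 64, 64, 0,
             GL_LUMINANCE, GL_FLOAT, pixels);

// Variant 2: sized one-channel "red" float internal format (ARB_texture_rg)
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 64, 64, 0,
             GL_RED, GL_FLOAT, pixels);
[/codebox]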

System:

  • GeForce 9600 (compute capability 1.1);
  • CUDA Toolkit, drivers and SDK 3.1 (previous versions were properly uninstalled before installing the new ones);
  • VS2005;
  • Windows 7;

Thanks in advance for any help!

I have succeeded in creating and using textures (with PyCUDA on Linux, but I hope this is still useful for you).

I start by creating a buffer object, then use that buffer as a texture, and then map the buffer as a CUDA object:

[codebox]# Generate two buffer objects and two texture names
self.gl_zero, self.gl_one = glGenBuffers(2)
self.textureId0, self.textureId1 = glGenTextures(2)

# Fill a float32 array and upload it into the first buffer object
data = numpy.ones((256, 256, 256), 'f')
glBindBuffer(GL_TEXTURE_BUFFER, self.gl_zero)
glBufferData(GL_TEXTURE_BUFFER, data, GL_DYNAMIC_DRAW)

# Expose the buffer as a single-channel float buffer texture
# (buffer textures are bound to GL_TEXTURE_BUFFER, not GL_TEXTURE_3D)
glBindTexture(GL_TEXTURE_BUFFER, self.textureId0)
glTexBuffer(GL_TEXTURE_BUFFER, OpenGL.raw.GL.ARB.texture_rg.GL_R32F, self.gl_zero)

# Register the buffer object with CUDA
self.cuda_zero = pycuda.gl.BufferObject(long(self.gl_zero))
[/codebox]

Basically, BufferObject calls [font=“Courier New”]cuGLRegisterBufferObject[/font] under the hood.
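For anyone working in CUDA C rather than PyCUDA, the equivalent steps with the newer graphics-resource API look roughly like this (a sketch; bufferId is a hypothetical name standing in for your GL buffer object):

[codebox]#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// bufferId: an existing GL buffer object (hypothetical name)
cudaGraphicsResource* res = 0;
cudaGraphicsGLRegisterBuffer(&res, bufferId, cudaGraphicsMapFlagsNone);

// Map it to obtain a device pointer usable from kernels
void*  devPtr   = 0;
size_t numBytes = 0;
cudaGraphicsMapResources(1, &res, 0);
cudaGraphicsResourceGetMappedPointer(&devPtr, &numBytes, res);

// ... launch kernels on devPtr ...

cudaGraphicsUnmapResources(1, &res, 0);
cudaGraphicsUnregisterResource(res);
[/codebox]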

In my experience, the order of creating buffers, registering objects, attaching them to CUDA, etc. matters.

Depending on how you use the texture (in shaders, or old-style fixed-function OpenGL), you may or may not need to set TexParameters and call similar functions.
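For example, if you sample a 2D texture the old-style way, you typically need at least the filtering state set (illustrative values; texId is a hypothetical texture name, and buffer textures ignore these parameters anyway):

[codebox]glBindTexture(GL_TEXTURE_2D, texId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
[/codebox]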

Hope it helps.
