Hello there.
I have this situation where I create a texture image and upload it to the GPU, via regular OpenGL calls.
Then I call [font="Courier New"]cudaGraphicsGLRegisterImage(&cudaRes, myBeautyTexID, GL_TEXTURE_2D, cudaGraphicsMapFlagsNone);[/font]. Note: cudaGLSetGLDevice(0); has already been called successfully, with no errors.
And there my problem begins:
If I use, for example, [font="Courier New"]glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 64, 64, 0, GL_RGBA, GL_FLOAT, pixels);[/font] (that's 4 channels, floating-point type), everything works fine. I can map and unmap the CUDA resource without errors.
Now, if I use [font="Courier New"]glTexImage2D(GL_TEXTURE_2D, 0, 1, 64, 64, 0, GL_LUMINANCE, GL_FLOAT, pixels);[/font] (one floating-point luminance channel), then cudaGraphicsGLRegisterImage fails with "invalid argument" (error 11). The texture itself still works fine in OpenGL.
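For reference, here is a minimal sketch of the failing path. It assumes a valid GL context is already current and that cudaGLSetGLDevice(0) has succeeded; checkCuda is a hypothetical error-reporting helper, not part of the CUDA API:

```cpp
#include <cstdio>
#include <GL/gl.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// Hypothetical helper: report any CUDA error with a label.
static void checkCuda(cudaError_t err, const char* what) {
    if (err != cudaSuccess)
        std::printf("%s failed: %s\n", what, cudaGetErrorString(err));
}

void registerLuminanceTexture() {
    // Assumes a GL context is current and cudaGLSetGLDevice(0) succeeded.
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    // One float luminance channel -- this is the variant that triggers
    // "invalid argument" when registered below. The GL_RGBA32F variant
    // registers without error.
    glTexImage2D(GL_TEXTURE_2D, 0, 1, 64, 64, 0,
                 GL_LUMINANCE, GL_FLOAT, nullptr);

    cudaGraphicsResource* cudaRes = nullptr;
    checkCuda(cudaGraphicsGLRegisterImage(&cudaRes, tex, GL_TEXTURE_2D,
                                          cudaGraphicsMapFlagsNone),
              "cudaGraphicsGLRegisterImage");
}
```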
The CUDA Programming Guide version 3.1.1, section 3.2.8.1 (page 41), specifies (I'm feeling like a lawyer) that "[font="Courier New"]cudaGraphicsGLRegisterImage()[/font] supports all texture formats with 1, 2 or 4 components and an internal type of float (e.g. GL_RGBA_FLOAT_32) and unnormalized integer (e.g. GL_RGBA8UI). (…)".
So I'm lost here. Am I missing anything? I've tried using GL_RED instead of GL_LUMINANCE in the glTexImage2D(…) call, but with no positive results.
Why is it not letting me register the texture for access from CUDA?
System:
- GeForce 9600 (compute capability 1.1);
- CUDA Toolkit, drivers and SDK 3.1 (previous versions were properly uninstalled before installing the new ones);
- VS2005;
- Windows 7;
Thanks in advance for any help!