cuGraphicsGLRegisterImage returns CUDA_ERROR_INVALID_VALUE (CUDA 3.0)

Has anybody used cuGraphicsGLRegisterImage? It does not work for me.

(cudaGraphicsGLRegisterImage had the same problem.)

cuGraphicsGLRegisterImage returns CUDA_ERROR_INVALID_VALUE.

Here is the code; what is wrong with it?

[codebox]
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
/* internal format 4 is the legacy alias for GL_RGBA (unsigned normalized) */
glTexImage2D(GL_TEXTURE_2D, 0, 4, dx, dy, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);

CUdevice dev = 0;
CUcontext ctx = 0;
CUresult cuerr = CUDA_SUCCESS;

cuerr = cuInit(0);
cuerr = cuDeviceGet(&dev, 0);
cuerr = cuGLCtxCreate(&ctx, CU_CTX_BLOCKING_SYNC | CU_CTX_MAP_HOST, dev);

CUgraphicsResource cu_res = 0;
cuerr = cuGraphicsGLRegisterImage(&cu_res, tex, GL_TEXTURE_2D,
                                  CU_GRAPHICS_MAP_RESOURCE_FLAGS_WRITE_DISCARD);
[/codebox]

cuerr is CUDA_ERROR_INVALID_VALUE
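Since every call in the snippet overwrites cuerr, it may help to check each result individually to confirm that it really is the register call that fails. A minimal sketch, assuming the same variables as above (the CU_CHECK macro is my own, not part of the CUDA API):

[codebox]
#include <stdio.h>

/* Print the name of any driver-API call that does not return CUDA_SUCCESS. */
#define CU_CHECK(call) do { \
    CUresult e = (call); \
    if (e != CUDA_SUCCESS) fprintf(stderr, "%s failed with error %d\n", #call, (int)e); \
} while (0)

CU_CHECK(cuInit(0));
CU_CHECK(cuDeviceGet(&dev, 0));
CU_CHECK(cuGLCtxCreate(&ctx, CU_CTX_BLOCKING_SYNC | CU_CTX_MAP_HOST, dev));
CU_CHECK(cuGraphicsGLRegisterImage(&cu_res, tex, GL_TEXTURE_2D,
                                   CU_GRAPHICS_MAP_RESOURCE_FLAGS_WRITE_DISCARD));
[/codebox]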

thx

I independently received the same error (CUDA_ERROR_INVALID_VALUE).
CUDA 3.0, Linux 64-bit, driver 195.22.

The system I tried on was 64-bit Linux, CUDA 3.0 beta, driver 195.17.

I tried on Windows XP too and got the same error.

I can confirm this bug for both the 195.17 and 195.22 drivers on Linux x64.

In general, buffers seem to work as expected, but renderbuffers and textures only work for a select set of internal formats (e.g. GL_RGBA32F).
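For reference, the buffer path that does work looks roughly like this sketch (runtime API; pbo, dx and dy are placeholder names):

[codebox]
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, dx * dy * 4, 0, GL_DYNAMIC_DRAW);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

cudaGraphicsResource *res = 0;
cudaGraphicsGLRegisterBuffer(&res, pbo, cudaGraphicsMapFlagsNone);  /* succeeds */

cudaGraphicsMapResources(1, &res, 0);
void  *devPtr   = 0;
size_t numBytes = 0;
cudaGraphicsResourceGetMappedPointer(&devPtr, &numBytes, res);
/* ... launch a kernel that reads/writes devPtr ... */
cudaGraphicsUnmapResources(1, &res, 0);
[/codebox]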

I also noticed there’s a cudaGraphicsMapFlagsWriteDiscard flag for registering images. I suppose this is reserved for future use, since CUDA can’t write to cudaArrays?

N.

I’ve done some more testing on the cudaGraphicsGLRegisterImage issue.

It turns out that:

[codebox]
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_XXXXYYF, width, height, 0, GL_RGBA, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

cudaGraphicsGLRegisterImage(&resource, tex, GL_TEXTURE_2D, cudaGraphicsMapFlagsNone);

works for
XXXX = R, RG, RGB or RGBA
YY   = 16 or 32
[/codebox]
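For example, a concrete instance of the float case (picking GL_RGBA32F; width and height are placeholders) would be:

[codebox]
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glBindTexture(GL_TEXTURE_2D, 0);

cudaGraphicsResource *resource = 0;
cudaError_t err = cudaGraphicsGLRegisterImage(&resource, tex, GL_TEXTURE_2D,
                                              cudaGraphicsMapFlagsNone);
/* err is expected to be cudaSuccess for this format */
[/codebox]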

and

[codebox]
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_XXXXYYZZ, width, height, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

cudaGraphicsGLRegisterImage(&resource, tex, GL_TEXTURE_2D, cudaGraphicsMapFlagsNone);

works for
XXXX = R, RG, RGB or RGBA
YY   = 8, 16 or 32
ZZ   = I or UI
[/codebox]
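Likewise, a concrete instance of the integer case (picking GL_RGBA8UI_EXT, which requires the GL_EXT_texture_integer extension, together with the GL_RGBA_INTEGER external format) would be:

[codebox]
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI_EXT, width, height, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glBindTexture(GL_TEXTURE_2D, 0);

cudaGraphicsResource *resource = 0;
cudaGraphicsGLRegisterImage(&resource, tex, GL_TEXTURE_2D, cudaGraphicsMapFlagsNone);
[/codebox]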

Notice that it is important to set the minification filter to GL_NEAREST; with the default GL_NEAREST_MIPMAP_LINEAR filter the texture expects a full mipmap chain and is otherwise incomplete.

In conclusion, it looks like the cudaGraphicsGL interface is working for most formats, excluding normalized internal formats such as the commonly used GL_RGBA8 format.

N.

Today I tried cudaGraphicsGLRegisterImage for a texture in GL_RGBA8 format, and I also got this error.

I am working on WinXP; the driver version is 195.39.

Is this a bug in the driver, or in my code?

Does anyone know if nVidia plans to support depth formats?

GL_DEPTH_COMPONENT32 doesn’t return an error, but it also doesn’t return what you would expect…

The array comes back as 4 components of CU_AD_FORMAT_UNSIGNED_INT8. More puzzling, it’s completely blank: 0x00000000 for each pixel (or rather, 0x00 for each component of each pixel), even though copying it out through OpenGL proves it certainly isn’t empty.
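For what it’s worth, this is roughly how one can inspect what actually ends up in the mapped array: a sketch with the runtime API, assuming the resource was registered as in the earlier posts and that the array has 4 bytes per pixel.

[codebox]
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

void dumpRegisteredImage(cudaGraphicsResource *resource, int width, int height)
{
    cudaGraphicsMapResources(1, &resource, 0);

    cudaArray *arr = 0;
    cudaGraphicsSubResourceGetMappedArray(&arr, resource, 0, 0);

    size_t rowBytes = (size_t)width * 4;   /* assumes 4 x 8-bit components */
    unsigned char *host = (unsigned char *)malloc(rowBytes * height);

    cudaMemcpy2DFromArray(host, rowBytes, arr, 0, 0,
                          rowBytes, height, cudaMemcpyDeviceToHost);

    printf("pixel(0,0) = %02x %02x %02x %02x\n",
           host[0], host[1], host[2], host[3]);

    free(host);
    cudaGraphicsUnmapResources(1, &resource, 0);
}
[/codebox]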

Has anyone managed to get this working yet?

I’d like to bump this again: with the latest CUDA beta (3.1) there is still no support for the normalized formats. Especially for volume rendering, unsigned normalized 16-bit is very useful (medical data is 12 bits), as half floats offer only 10 bits of precision. It is weird that texture fetches within CUDA support a normalized read mode, but the corresponding OpenGL formats are not accepted…

This functionality seems to be supported in OpenCL.

Due to differences between OpenGL and CUDA textures (CUDA doesn’t have named formats which specify component order), we currently only support floating point and unnormalized integer texture formats for CUDA/OpenGL texture interop. Try using GL_RGBA8UI_EXT instead of GL_RGBA8.

This should be improved in future releases.
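For anyone who needs normalized [0,1] values in the kernel anyway, one possible sketch on top of the GL_RGBA8UI_EXT workaround is to bind the mapped array to a texture reference declared with cudaReadModeNormalizedFloat (names like texRef and runSample are just for illustration):

[codebox]
#include <cuda_runtime.h>

/* uchar4 + cudaReadModeNormalizedFloat: tex2D() returns float4 in [0,1]. */
texture<uchar4, 2, cudaReadModeNormalizedFloat> texRef;

__global__ void sampleKernel(float4 *out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        out[y * width + x] = tex2D(texRef, x + 0.5f, y + 0.5f);
}

void runSample(cudaGraphicsResource *resource, float4 *d_out, int width, int height)
{
    cudaGraphicsMapResources(1, &resource, 0);

    cudaArray *arr = 0;
    cudaGraphicsSubResourceGetMappedArray(&arr, resource, 0, 0);

    /* channel desc must match the 4 x unsigned 8-bit array created by GL interop */
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<uchar4>();
    cudaBindTextureToArray(texRef, arr, desc);

    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    sampleKernel<<<grid, block>>>(d_out, width, height);

    cudaUnbindTexture(texRef);
    cudaGraphicsUnmapResources(1, &resource, 0);
}
[/codebox]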