Error when operating on a 2D image created from an OpenGL texture?

Hello,

I am trying to operate on an OpenCL 2D image object created from an OpenGL texture. The image object is created and acquired successfully, but it always fails when I invoke an operation command on it. The code is as follows:

glGenTextures(1, &textureName);
glBindTexture(GL_TEXTURE_2D, textureName);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, srcImage.Width(), srcImage.Height(), 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, NULL);

cl::Image2DGL desImageObj(context, CL_MEM_READ_WRITE, GL_TEXTURE_2D, 0, textureName, &errCode);
if (errCode != CL_SUCCESS) {
    std::cout << "OpenCL OpenGL shared image object can not be created, error code:" << errCode << std::endl;
}

std::vector<cl::Memory> sharedObjs;
sharedObjs.push_back(desImageObj);
if (queues[0].enqueueAcquireGLObjects(&sharedObjs) != CL_SUCCESS) {
    std::cout << "Objects can not be acquired " << std::endl;
}

...

// origin = {0,0,0}, region = {image width, image height, 1}; rgbaData is the RGBA image data
errCode = queues[0].enqueueWriteImage(desImageObj, CL_TRUE, origin, region, 0, 0, rgbaData);
if (errCode != CL_SUCCESS) {
    std::cout << "Writing data into image object failed! " << errCode << std::endl;
}

When calling “enqueueWriteImage”, it returns the error code “CL_INVALID_VALUE”. I have checked all the parameters and cannot find any invalid values.

Can someone tell me where the problem is?

My development environment: OpenSUSE 11.2 + GTX260 + Nvidia driver 258.19(OpenCL 1.1 pre-release)

Could this be a bug in the driver?

I have the same problem… have you managed to solve it?

I must confess I haven’t used OpenGL and OpenCL together, but I would expect a call to clCreateFromGLTexture2D before clEnqueueWriteImage.

The line
cl::Image2DGL desImageObj(context,CL_MEM_READ_WRITE,GL_TEXTURE_2D,0, textureName ,&errCode);
looks odd. Is it a declaration? I would expect something like
cl::Image2DGL* desImageObj = new Image2DGL(context,CL_MEM_READ_WRITE,GL_TEXTURE_2D,0, textureName ,&errCode);

However, this is only my guess, not my experience.

Thanks for all your replies.

I have tried different texture format settings and still couldn’t solve it. Besides, there is no sample in the Nvidia GPU Computing SDK that uses an image object created from a texture; some samples just use a buffer object created from an OpenGL buffer. I wonder whether the current Nvidia drivers support it at all.
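If it really is enqueueWriteImage on GL-shared images that is unimplemented, one possible workaround (just a sketch, untested on this driver) is to let OpenGL do the upload with glTexSubImage2D while the texture is not acquired by OpenCL, and only touch the shared image from kernels. This assumes textureName, rgbaData, srcImage, queues[0], sharedObjs, and kernel exist as in the snippets above:

```cpp
// Sketch of a workaround: upload pixel data on the GL side, then use the
// shared image only inside OpenCL kernels. Assumes the names from the
// earlier code; kernel is a hypothetical cl::Kernel that reads/writes the image.
glBindTexture(GL_TEXTURE_2D, textureName);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                srcImage.Width(), srcImage.Height(),
                GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, rgbaData);
glFinish();  // ensure GL has finished before OpenCL acquires the texture

queues[0].enqueueAcquireGLObjects(&sharedObjs);
queues[0].enqueueNDRangeKernel(kernel, cl::NullRange,
                               cl::NDRange(srcImage.Width(), srcImage.Height()),
                               cl::NullRange);
queues[0].enqueueReleaseGLObjects(&sharedObjs);
queues[0].finish();  // ensure OpenCL has finished before GL uses the texture again
```

The glFinish/finish calls are the conservative synchronization the interop spec allows; finer-grained sync may be possible, but this keeps the sketch simple.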

cl::Image2DGL desImageObj(context, CL_MEM_READ_WRITE, GL_TEXTURE_2D, 0, textureName, &errCode); should be OK. Its constructor invokes clCreateFromGLTexture2D, so it is equivalent to cl::Image2DGL* desImageObj = new cl::Image2DGL(context, CL_MEM_READ_WRITE, GL_TEXTURE_2D, 0, textureName, &errCode);

After this first attempt, I find that OpenCL is still not well supported by vendors. Intel will not release a CPU driver until the end of this year, and Nvidia pays much more attention to its own CUDA and related APIs than to OpenCL. I hope more effort will be put into helping developers.

It seems you are right: the Nvidia driver doesn’t support this feature yet.

gameover: It appears that clEnqueueWriteImage is only implemented for clCreateImage2D, not for clCreateFromGLTexture2D, when using the Nvidia SDK:
http://www.openframeworks.cc/forum/viewtop…?p=22029#p22029

http://www.khronos.org/message_boards/viewtopic.php?f=37&t=3030

OK, there is nothing to do but wait for a future implementation. After all, these are optional functions, and Nvidia has the right to ignore them.

However, they are very useful for graphics development, and I hope to see them implemented.
