CL-GL texture interop trouble, read_imageui behaves strangely

Hello

I want OpenCL to read an OpenGL texture through CL-GL interop.
However, when I use read_imageui, it returns a very large number.

For example, if the R value in the OpenGL texture is 1, the value returned by read_imageui is 998277249.

Details:

I create a cube map in OpenGL, which works fine.
glGenTextures(1, &this->gl_evnTextureHandle);
glBindTexture(GL_TEXTURE_CUBE_MAP, this->gl_evnTextureHandle);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, GL_RGBA8,
             cubeLenth, cubeLenth, 0, GL_RGBA, GL_UNSIGNED_BYTE, this->m_cubeTex[0]);

// other 5 faces …

// then I share the texture with OpenCL

m_cubeTexCL[0] = clCreateFromGLTexture2D(this->m_GPUComputer->getContext(),
                                         CL_MEM_READ_ONLY,
                                         GL_TEXTURE_CUBE_MAP_POSITIVE_X,
                                         0,
                                         this->gl_evnTextureHandle,
                                         &erro);
oclCheckError(erro, CL_SUCCESS);
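
Since six objects are acquired later, each of the six faces gets its own CL image. A minimal sketch of that loop (the array name and error variable just follow the snippets above; the six cube-map face targets are consecutive GL enums):

// sketch: share all six cube-map faces with OpenCL
for (int face = 0; face < 6; ++face) {
    m_cubeTexCL[face] = clCreateFromGLTexture2D(this->m_GPUComputer->getContext(),
                                                CL_MEM_READ_ONLY,
                                                GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
                                                0,                          // mip level
                                                this->gl_evnTextureHandle,  // the GL cube map
                                                &erro);
    oclCheckError(erro, CL_SUCCESS);
}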

// with this sampler
this->m_pixelSampler = clCreateSampler(m_cxGPUContex, CL_FALSE,
                                       CL_ADDRESS_CLAMP_TO_EDGE, CL_FILTER_NEAREST,
                                       &m_ciErrNum);
shrCheckError(m_ciErrNum,CL_SUCCESS);

// then run the kernel
this->m_ciErrNum = clEnqueueAcquireGLObjects(this->m_cqCommandQueue,6,this->m_cubeMapCL,0, NULL, NULL);
oclCheckError(m_ciErrNum, CL_SUCCESS);

// invoke clEnqueueNDRangeKernel to run the kernel on this->m_cubeMapCL[0], for example (rough launch sketched below)

this->m_ciErrNum = clEnqueueReleaseGLObjects(this->m_cqCommandQueue,6,this->m_cubeMapCL,0, NULL, NULL);
oclCheckError(m_ciErrNum, CL_SUCCESS);
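
The kernel launch referred to above looks roughly like this; the kernel handle, the out buffer, and the arg indices are placeholders of mine, not the original code:

// sketch of the kernel launch between acquire and release (placeholder names)
m_ciErrNum  = clSetKernelArg(m_kernel, 0, sizeof(cl_mem), &m_cubeMapCL[0]);      // mapFace
m_ciErrNum |= clSetKernelArg(m_kernel, 1, sizeof(cl_sampler), &m_pixelSampler);  // simpleSampler
m_ciErrNum |= clSetKernelArg(m_kernel, 2, sizeof(cl_mem), &m_outBufferCL);       // out

size_t globalSize[2] = { (size_t)cubeLenth, (size_t)cubeLenth };
m_ciErrNum |= clEnqueueNDRangeKernel(m_cqCommandQueue, m_kernel, 2, NULL,
                                     globalSize, NULL, 0, NULL, NULL);
oclCheckError(m_ciErrNum, CL_SUCCESS);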

// the CL kernel code is:

// read from the 2D texture; out is defined as "__global uint *"
uint4 color = read_imageui(mapFace, simpleSampler, coordinate);
out[tid] = color.x;
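
For context, the whole kernel is essentially the following; the kernel name and the way coordinate/tid are computed are my reconstruction, not the original source:

// reconstructed kernel sketch (argument names follow the fragment above)
__kernel void readFace(__read_only image2d_t mapFace,
                       sampler_t simpleSampler,
                       __global uint *out)
{
    int2 coordinate = (int2)(get_global_id(0), get_global_id(1));
    int tid = coordinate.y * (int)get_global_size(0) + coordinate.x;

    // read from the 2D texture
    uint4 color = read_imageui(mapFace, simpleSampler, coordinate);
    out[tid] = color.x;
}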

What I find:

if R in the OpenGL texture is 0, color.x returns 0
if R in the OpenGL texture is 256, color.x also returns 0
if R in the OpenGL texture is 127, color.x returns 1056898815
if R in the OpenGL texture is 1, color.x returns 998277249
if R in the OpenGL texture is 255, color.x returns 1065353216

It seems something is wrong with the format, but I don't know where.

thanks for your attention and help

In addition:

If I create the OpenCL image directly from CPU data (like this->m_cubeTex[0]),

// the image format is

cl_image_format simple_format;
simple_format.image_channel_order = CL_RGBA;
simple_format.image_channel_data_type = CL_UNSIGNED_INT8;

then everything works correctly.
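
In that path the image is created with clCreateImage2D, roughly like this (the flags and dimensions are my guess at the setup, not the exact original call):

// sketch: create the image directly from the CPU data
cl_mem cpuImage = clCreateImage2D(m_cxGPUContex,
                                  CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                  &simple_format,
                                  cubeLenth, cubeLenth,
                                  0,                    // row pitch, 0 = tightly packed
                                  this->m_cubeTex[0],   // host data
                                  &m_ciErrNum);
oclCheckError(m_ciErrNum, CL_SUCCESS);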

thanks again

OpenCL 1.1 spec, section 9.21.3.1:

A GL texture with internal format GL_RGBA8 corresponds to an OpenCL image format of
CL_RGBA, CL_UNORM_INT8 or
CL_BGRA, CL_UNORM_INT8

and

read_imagef returns floating-point values in the range [0.0… 1.0] for image objects created with image_channel_data_type set to one of the predefined packed formats or CL_UNORM_INT8 or CL_UNORM_INT16.

Could this be a factor?
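
One way to see this: with CL_UNORM_INT8 the texel is normalized to R/255, and the numbers reported above look exactly like the raw IEEE-754 bit patterns of those normalized floats. A small host-side check (plain C, assuming 32-bit IEEE floats):

#include <stdio.h>
#include <string.h>

/* reinterpret the reported uints as float bit patterns */
int main(void)
{
    unsigned int reported[] = { 998277249u, 1056898815u, 1065353216u };  /* R = 1, 127, 255 */
    for (int i = 0; i < 3; ++i) {
        float f;
        memcpy(&f, &reported[i], sizeof f);
        printf("%u -> %f\n", reported[i], f);   /* prints ~1/255, ~127/255 and 1.0 */
    }
    return 0;
}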

Sorry for the delay.

I have always referred to the OpenCL 1.0.29 spec, because according to the NVIDIA OpenCL driver info only OpenCL 1.0 is supported.

There seems to be no such GL-CL correspondence table in OpenCL 1.0.29.

So that might well be the cause; I will try it.

thanks

Thanks to fungja123.

I use read_imagef instead, since GL_RGBA8 corresponds to CL_UNORM_INT8.
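
The read in the kernel now looks roughly like this (the scale back to 0..255 is my addition, since out stays a uint buffer):

// read as normalized float and convert back to 0..255
float4 color = read_imagef(mapFace, simpleSampler, coordinate);
out[tid] = (uint)(color.x * 255.0f + 0.5f);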

It works now.