I am currently working on some code to visualize images I am processing with CUDA 10 and the VisionWorks NVXCU API. I have gotten visualization of 24-bit RGB, unsigned 8-bit grayscale, and unsigned 16-bit grayscale images working fine. I would also like to visualize signed 16-bit images, but I am running into a problem with the cudaGraphicsGLRegisterImage function.
The relevant code is as follows:
// Create and configure the texture that the CUDA side will write into.
glGenTextures(1, &m_imageTexture);
glBindTexture(GL_TEXTURE_2D, m_imageTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
float borderColor[] = { 1.0f, 1.0f, 0.0f, 1.0f };
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, borderColor);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Allocate storage with a signed 16-bit integer internal format (GL_R16I).
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16I, width, height, 0, GL_RED, GL_SHORT, nullptr);
glCheckError();

m_shaderPtr = std::make_unique<Shader>("shaders/image.vs", "shaders/grayS16_image.fs");

// Register the texture with CUDA so kernels can write to it through a surface.
CUDA_SAFE_CALL( cudaGraphicsGLRegisterImage(&m_graphicsResource, m_imageTexture, GL_TEXTURE_2D,
                                            cudaGraphicsRegisterFlagsWriteDiscard | cudaGraphicsRegisterFlagsSurfaceLoadStore) );
The problem occurs on the last line, where the call returns error 11, "invalid argument". The same code works fine if I replace GL_R16I with GL_R16.
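For comparison, the variant that registers without error differs only in the internal format; the register call itself is unchanged:

// Works: normalized 16-bit internal format instead of the signed integer one.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, width, height, 0, GL_RED, GL_SHORT, nullptr);
glCheckError();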
I would have preferred to use a GL_R16_SNORM texture, but since cudaGraphicsGLRegisterImage does not support that format, I was planning to use a GL_R16I texture and do the necessary conversion myself in the fragment shader. However, I cannot get a GL_R16I texture to work, or any other integer texture for that matter, even though the CUDA documentation lists them as supported.
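Roughly, the conversion I have in mind for shaders/grayS16_image.fs looks like the sketch below. The varying and uniform names are just placeholders, and it assumes image.vs forwards a texture coordinate; the idea is to sample through an isampler2D and map the raw signed 16-bit range onto [0, 1]:

#version 330 core
// Sketch of grayS16_image.fs: display a GL_R16I texture as grayscale.
in vec2 TexCoord;                 // assumed to be passed through by image.vs
out vec4 FragColor;

uniform isampler2D imageTexture;  // the GL_R16I texture bound by the C++ side

void main()
{
    // Integer samplers return raw texel values and require GL_NEAREST
    // filtering, which the texture setup above already uses.
    int raw = texture(imageTexture, TexCoord).r;

    // Map the signed 16-bit range [-32768, 32767] onto [0, 1] for display.
    float gray = (float(raw) + 32768.0) / 65535.0;
    FragColor = vec4(vec3(gray), 1.0);
}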
Does anyone have a suggestion as to how I can fix this error?