cudaDecodeGL in the CUDA_Samples directory is designed to work with the freeglut library, and I’m trying to convert it to use GLFW. I changed the initialization code at the beginning, but much of the code from the example stays the same. For example, the render step performs these commands:
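The initialization changes amount to roughly the following sketch (names like `initGL` and the window title are illustrative; the key points are creating the window and making its context current before any GL or CUDA interop calls):

```cpp
#include <GLFW/glfw3.h>
#include <cstdio>

GLFWwindow *g_window = nullptr;

bool initGL(int width, int height)
{
    if (!glfwInit())
    {
        fprintf(stderr, "glfwInit failed\n");
        return false;
    }

    // The sample's glBegin/glEnd and ARB fragment-program code needs a
    // compatibility context, so don't request a core profile here.
    g_window = glfwCreateWindow(width, height, "cudaDecodeGL (GLFW)", nullptr, nullptr);
    if (!g_window)
    {
        glfwTerminate();
        return false;
    }

    // The GL context must be current on this thread before any gl* calls
    // and before registering the PBOs/textures with CUDA.
    glfwMakeContextCurrent(g_window);
    return true;
}
```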
```cpp
glClear(GL_COLOR_BUFFER_BIT);

// load texture from pbo
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, gl_pbo_[field_num]);
glBindTexture(GL_TEXTURE_TYPE, gl_texid_[field_num]);
glTexSubImage2D(GL_TEXTURE_TYPE, 0, 0, 0, nTexWidth_, nTexHeight_,
                GL_BGRA, GL_UNSIGNED_BYTE, 0);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);

// fragment program is required to display floating point texture
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, gl_shader_);
glEnable(GL_FRAGMENT_PROGRAM_ARB);
glDisable(GL_DEPTH_TEST);

float fTexWidth  = (float)nWidth_  / (float)nTexWidth_;
float fTexHeight = (float)nHeight_ / (float)nTexHeight_;

glBegin(GL_QUADS);
glTexCoord2f(0, (GLfloat)nTexHeight_);
glVertex2f(0, 0);
glTexCoord2f((GLfloat)nTexWidth_, (GLfloat)nTexHeight_);
glVertex2f(1, 0);
glTexCoord2f((GLfloat)nTexWidth_, 0);
glVertex2f(1, 1);
glTexCoord2f(0, 0);
glVertex2f(0, 1);
glEnd();

glBindTexture(GL_TEXTURE_TYPE, 0);
glDisable(GL_FRAGMENT_PROGRAM_ARB);
```
After calling this render command, the original freeglut code calls `glutSwapBuffers()` and a video frame is rendered to the output window. So in my GLFW version, I retain the above render steps (as well as the base frame-decoding logic in the sample code). I have confirmed that a frame is getting read in and decoded, and even mapped to the interop buffer. But when I call what I think are the equivalent GLFW buffer swap commands, `glfwSwapBuffers(window)` and `glfwPollEvents()`,
no video frame shows up.
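In context, my per-frame loop looks something like this sketch (`decodeAndRenderFrame` is an illustrative name standing in for the sample's decode logic plus the render steps above):

```cpp
// Main loop in the GLFW port: GLFW has no glutMainLoop/glutDisplayFunc,
// so the frame must be driven and events pumped manually.
while (!glfwWindowShouldClose(g_window))
{
    decodeAndRenderFrame();       // decode + render steps shown above

    glfwSwapBuffers(g_window);    // equivalent of glutSwapBuffers()
    glfwPollEvents();             // process window/input events
}
```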
Has anyone successfully implemented a GLFW version of the cudaDecodeGL example code? I’m uncertain what I’m doing wrong and am desperately seeking an example showing how to get this working.