Binding an OptiX texture to OpenGL

I’m currently trying to create some kind of ray tracing application with OptiX and OpenGL.

My problem right now is that I’m trying to bind the texture output from OptiX to my OpenGL sphere. After doing some research about binding textures in OpenGL, I only found ways to bind an image to a texture.

I tried to implement it in a similar way, but it doesn’t work. Any idea what the right way to do it is?

Here is my code:


// global variables for the IDs
GLuint optix_tex;
GLuint vbo = 0;

// creating a buffer for OptiX and binding it to an OpenGL buffer
optix::Buffer CreateOutputOptix(RTformat format, unsigned int width, unsigned int height)
{
    optix::Buffer buffer;

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    size_t element_size;
    optix_context->checkError(rtuGetSizeForRTformat(format, &element_size));
    glBufferData(GL_ARRAY_BUFFER, element_size * width * height, 0, GL_STREAM_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    buffer = optix_context->createBufferFromGLBO(RT_BUFFER_OUTPUT, vbo);
    buffer->setSize(width, height);

    return buffer;
}

optix::Buffer renderInitializeOptix()
{
    // initializing the OptiX context
    // set up geometry

    // Create an output texture for OpenGL
    glGenTextures(1, &optix_tex);
    glBindTexture(GL_TEXTURE_2D, optix_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_BGR, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // Change these to GL_LINEAR for super- or sub-sampling.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // GL_CLAMP_TO_EDGE for linear filtering, not relevant for nearest.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // glBindTexture(GL_TEXTURE_2D, 0);
}

// function called in the main loop
void renderOptix()
{
    // updating the OptiX context
    // Transfer output to OpenGL texture
    glBindTexture(GL_TEXTURE_2D, optix_tex);
}

// OpenGL-specific code to bind, probably not necessary again
void renderSetupOptixGL()
{
    glBindTexture(GL_TEXTURE_2D, optix_tex);
}


// creating the sphere
void OpenGLRenderer::RenderSceneGl()
{
    glDisable(GL_LIGHTING);

    GLUquadricObj* sphere = NULL;
    sphere = gluNewQuadric();
    gluQuadricDrawStyle(sphere, GLU_FILL);
    gluQuadricTexture(sphere, GLU_TRUE);
    gluQuadricNormals(sphere, GLU_SMOOTH);
    // Making a display list
    int mysphereID = glGenLists(1);
    glNewList(mysphereID, GL_COMPILE_AND_EXECUTE);
    gluSphere(sphere, 1.0, 20, 20);
    glEndList();
}

bool OpenGLRenderer::Do()


If you need further information, just ask. Thank you!

First, I would recommend using neither GLU nor OpenGL display lists for any current OpenGL development. Those are legacy concepts.
Your current implementation is going to create and evaluate all attributes of the quadric and compile and execute an OpenGL display list on every call to RenderSceneGl(), which is a complete waste of time and leaks the display list.
Just build your own sphere geometry once. It’s not difficult.
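To illustrate, here is a minimal sketch of building such sphere vertex data yourself (the names Vertex and buildSphere are just placeholders, and the index buffer connecting the vertices into triangles is omitted for brevity):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vertex
{
    float px, py, pz; // position (for a unit sphere this is also the normal)
    float u, v;       // texture coordinate
};

// Builds a latitude/longitude sphere once; upload it to a VBO afterwards.
// slices/stacks play the same role as the last two arguments of gluSphere().
std::vector<Vertex> buildSphere(float radius, int slices, int stacks)
{
    const float pi = 3.14159265358979323846f;
    std::vector<Vertex> vertices;
    vertices.reserve((slices + 1) * (stacks + 1));

    for (int j = 0; j <= stacks; ++j)
    {
        const float v     = float(j) / float(stacks); // in [0, 1]
        const float theta = v * pi;                   // in [0, pi]
        for (int i = 0; i <= slices; ++i)
        {
            const float u   = float(i) / float(slices); // in [0, 1]
            const float phi = u * 2.0f * pi;            // in [0, 2*pi]

            Vertex vtx;
            vtx.px = radius * sinf(theta) * cosf(phi);
            vtx.py = radius * cosf(theta);
            vtx.pz = radius * sinf(theta) * sinf(phi);
            vtx.u  = u;
            vtx.v  = 1.0f - v;
            vertices.push_back(vtx);
        }
    }
    return vertices;
}
```

Build this once at startup, put the result into a vertex buffer object, and draw it every frame instead of recompiling a display list.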

The way to think about rendering to an OpenGL texture with OptiX is like this: You cannot render to a texture itself in OptiX! You can only render into an OpenGL-OptiX (CUDA) interop buffer which is then uploaded to the OpenGL texture. Since that happens all in device memory, this transfer is going to be fast.
That means you need to:
1. Generate an OpenGL pixel buffer object (PBO).
2. Set it to some non-zero size.
3. Create an OpenGL-OptiX interop buffer with a matching output format from it.
4. Bind that to the buffer variable you write to inside the OptiX device code.
5. Launch the OptiX kernel.
6. After it has finished, bind the OpenGL PBO as pixel unpack buffer and do a glTexImage2D() (or glTexSubImage2D() if the texture size has been set before) with the proper format and a null offset, to upload the data you generated with OptiX to the OpenGL texture.
7. Then use that OpenGL texture object to render with in your OpenGL code.

To do that you had already found the relevant code, but you didn’t actually upload any texture data!
I would also not recommend using RGB data. There are no three-component textures in the hardware.
When using BGRA and unsigned byte as user format and type, you should hit the fastest texture upload path.

The code below contains the excerpts from an application which does all that host-side setup in the necessary order from top to bottom.
For completeness it also contains the code for the case where no OpenGL-OptiX interop buffer is used, in the else-clauses of the m_interop condition.
Note that some more code is needed to make this compile.

The code in Application::display() will render a textured quad to the full viewport with that setup.
That’s what you would normally do to present your rendered OptiX image via a full-viewport texture blit.
Now if you render something else with OpenGL using texture coordinates there, the active texture will be mapped to that, for example your sphere.

class Application
{
  bool   m_interop;
  GLuint m_pbo;
  GLuint m_tex;

  optix::Context m_context;

  optix::Buffer m_buffer;
  // Viewport size
  int m_width;
  int m_height;
  // OptiX launch size
  unsigned int m_widthLaunch;
  unsigned int m_heightLaunch;
};

void Application::initOpenGL()
{
  glViewport(0, 0, m_width, m_height);

  if (m_interop)
  {
    glGenBuffers(1, &m_pbo);
    DP_ASSERT(m_pbo != 0);
    // Buffer size must be > 0 or OptiX can't create a buffer from it.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, m_widthLaunch * m_heightLaunch * sizeof(unsigned char) * 4, nullptr, GL_STREAM_READ); // BGRA8
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
  }

  // glPixelStorei(GL_UNPACK_ALIGNMENT, 4); // default, works for BGRA8, RGBA16F, and RGBA32F.

  glGenTextures(1, &m_tex);
  DP_ASSERT(m_tex != 0);
  glBindTexture(GL_TEXTURE_2D, m_tex);
  glBindTexture(GL_TEXTURE_2D, 0);
}


void Application::initOptiX()
{
    // OptiX buffer initialization:
    m_buffer = (m_interop) ? m_context->createBufferFromGLBO(RT_BUFFER_OUTPUT, m_pbo)
                           : m_context->createBuffer(RT_BUFFER_OUTPUT);
    m_buffer->setFormat(RT_FORMAT_UNSIGNED_BYTE4); // BGRA8
    m_buffer->setSize(m_widthLaunch, m_heightLaunch);
}


bool Application::render()
{
    // Render to the buffer in OptiX
    m_context->launch(0, m_widthLaunch, m_heightLaunch);

    // This is what was missing or was done at the wrong place in your code!
    // Update the OpenGL texture with the results:
    if (m_interop)
    {
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_buffer->getGLBOId());
      glBindTexture(GL_TEXTURE_2D, m_tex);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, (GLsizei) m_widthLaunch, (GLsizei) m_heightLaunch, 0, GL_BGRA, GL_UNSIGNED_BYTE, nullptr); // BGRA8
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    }
    else
    {
      void const* data = m_buffer->map();
      glBindTexture(GL_TEXTURE_2D, m_tex);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, (GLsizei) m_widthLaunch, (GLsizei) m_heightLaunch, 0, GL_BGRA, GL_UNSIGNED_BYTE, data); // BGRA8
      m_buffer->unmap();
    }
    return true;
}

void Application::display()
{
  glBindTexture(GL_TEXTURE_2D, m_tex);

  glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); // Texture coordinates.
    glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f);
    glVertex2f(1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f);
    glVertex2f(1.0f, 1.0f);
    glTexCoord2f(0.0f, 1.0f);
    glVertex2f(-1.0f, 1.0f);
  glEnd();

  glBindTexture(GL_TEXTURE_2D, 0);
}

Ok thanks a lot.

I tried your suggestion, and finally something happens when I call the render function within the display loop.
I also figured out how to map it onto a sphere.

But now my problem is that I’m still not sure how to map the exact data I want to be mapped.
If I understood your suggestion the right way, I have to put it in here:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, (GLsizei) m_widthLaunch, (GLsizei) m_heightLaunch, 0, GL_BGRA, GL_UNSIGNED_BYTE, nullptr /* specific data in here? */);

I need to get a texture with a glass surface onto my sphere.
How do I get the data I need from my glass surface, or from the object with the glass surface?

My approach was to create my own sphere with its own bounds and intersection program in OptiX. Then I created a material for that sphere and instanced the geometry object like in the OptiX tutorial.
And now my problem is, if this could be the right way, how do I get my data from this sphere?

Thanks for your patience and help.

For glTexImage2D(), the last argument “data” can have two different meanings.

When there is no PBO bound, data is just a pointer to host memory with the texels’ image data in the size and format described by the other arguments. That’s the code path with m_interop == false above.

But when a pixel unpack buffer is bound while calling glTexImage2D(), the data argument will be a byte offset into that PBO, not actually a pointer.
I probably should have used “0” instead of nullptr there. It’s just the beginning of the PBO data in either case. That’s the m_interop == true code path above.

Now in either case, with the OptiX setup and some device code which outputs data into “sysBuffer”, that is going to be your texel data. The interop case will just be a lot faster because the data stays in device memory (VRAM).
It’s no different from what most OptiX SDK examples do to display the rendered image.
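For illustration only, here is a plain C++ sketch of what the texel data in that RT_FORMAT_UNSIGNED_BYTE4 buffer looks like. The names TexelBGRA8 and fakeLaunch are made up, and the gradient fill just stands in for whatever your OptiX device program actually writes:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// One RT_FORMAT_UNSIGNED_BYTE4 texel, interpreted as BGRA8 by the
// GL_BGRA / GL_UNSIGNED_BYTE arguments of glTexImage2D().
struct TexelBGRA8
{
    uint8_t b, g, r, a;
};

// Stand-in for the OptiX launch: fill the buffer with a red-green gradient.
// In the real application the ray generation program writes into the
// (interop) output buffer instead, and the data never leaves the GPU.
std::vector<TexelBGRA8> fakeLaunch(unsigned int width, unsigned int height)
{
    std::vector<TexelBGRA8> texels(static_cast<size_t>(width) * height);
    for (unsigned int y = 0; y < height; ++y)
    {
        for (unsigned int x = 0; x < width; ++x)
        {
            TexelBGRA8& t = texels[static_cast<size_t>(y) * width + x];
            t.r = static_cast<uint8_t>(255u * x / (width - 1));  // red ramps left to right
            t.g = static_cast<uint8_t>(255u * y / (height - 1)); // green ramps bottom to top
            t.b = 0;
            t.a = 255; // opaque
        }
    }
    return texels;
}
```

In the m_interop == false path, texels.data() is exactly the kind of host pointer you would pass as the last argument of glTexImage2D(); in the interop path the same bytes sit inside the PBO instead.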

I’m not sure what you mean by that.
“Glass” as a surface material is not something you can simply put into a 2D texture image, map onto some object in a rasterizer, and expect it to look like glass. That would just look as if a decal of a photo of a glass sphere had been glued onto a sphere. That’s something for an impostor image on a billboard.

Transparency in a rasterizer is much more complicated!

Maybe explain in detail what rendering result you’re trying to achieve in the end.
E.g. what information are you planning to encode into that texture image with the ray tracer to be used by the rasterizer?

Ok, thank you.

My exact task is to create an application similar to the “tutorial” example in the SDK, where you have some objects with a glass surface, but in my own framework, which uses OpenGL to render objects and runs in an OpenGL display loop.

Since I’m very new to OptiX, I thought using OpenGL in combination with OptiX was something special. Therefore I tried to find something on Google which combines both.
I found an example from NVIDIA called nvpro-samples/gl_optix_composite (the "OpenGL + OptiX Compositing" sample) on GitHub. This is an application where rendering is done with a combination of OpenGL and OptiX.

After looking through the code, I thought it might fit my purpose with some changes to make it usable in my framework. There was also a function to add a material via OptiX.

I also got some of the code I posted above from there.

But now, after your post, I’m starting to doubt whether this was the right way to do it.

So in the end I want to render some objects with a glass surface using OptiX in my own framework where the rendering is based on OpenGL.


Yes, that compositing example mixes rasterization with reflections rendered in OptiX, which results in accurate self-reflections, where it would be extremely difficult for a rasterizer to get that right.
I would argue that you could simply ray trace the whole thing in the same time. It’s more a proof of concept of hybrid rendering methods than an actually recommended way to do this.

Transparency is a whole different level of complexity for a rasterizer, while a ray tracer naturally solves the rendering order problems. Look for “Order Independent Transparency” (OIT) topics to get a glimpse of what a rasterizer would have to do to get the rendering order right, and then refraction and absorption are still not handled.

Do you mean you need to integrate your work into an already existing OpenGL framework?
If not, the OptiX SDK examples and the OptiX advanced examples on GitHub are already OpenGL applications (using freeglut resp. GLFW as frameworks); they only use OpenGL for the display of the image rendered with OptiX and for the GUI, but you could render anything else with OpenGL in these frameworks as well. I would recommend using the advanced samples’ GLFW and ImGui framework for your own experiments.

Ok, I’ll take a look at OIT.

Yes, it has to be integrated into an already existing framework. And it should use ray tracing and not rasterization. So in which way can I do something similar to the tutorial, but using ray tracing?

I'm not sure what you’re asking for now. The OptiX tutorial example is already doing all that, just using GLUT as its OpenGL framework.

If your task is to integrate OptiX ray tracing into an existing OpenGL framework, then just merge the ray tracing code you need from some OptiX example into your application and display the resulting image inside your OpenGL-based framework, similar to the code I posted above. That’s what all OptiX examples do.

Ok, thanks. I think I got it now.