out-of-memory during frame capture (glFlushMappedBufferRange)


When I try to capture a frame, the debugger aborts with an out-of-memory exception.

terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc

My (open-source) application is 32-bit, so address space is limited, but 2 GB should still be available to dump the data.

I notice the crash always appears when glFlushMappedBufferRange is called. It works if I use the old/slow glBufferSubData function to upload vertex data instead, so the high memory consumption seems related to glFlushMappedBufferRange.

My application uses big persistent buffers (around 8 MB) for vertex streaming.

I suspect that the debugger copies the full buffer instead of just the few useful bytes, or that there is a memory leak.

Hi gregory38,

Any chance we can get your application for some investigation?


Hello An,

Please find a tarball with my application.


Basically you just need to execute the run_me.sh script. It will render a single scene 1,000,000 times.

I’m on Debian Jessie, but library versions should be nearly identical to Ubuntu 14.04. It requires:

  • GTK 2
  • PNG 1.2
  • LZMA 5.1
  • libx11-6: 1.6.2
  • libc6: 2.9

(Hmm, I don’t know if I can statically link the libraries.)

Testing was done on the Nvidia 352.21 driver.

So far my investigation:

  • Just replacing memcpy/glFlushMappedBufferRange with glBufferSubData avoids the issue.
  • Frames with few draw calls seem to work.
  • If I reduce the size of my VBO/IBO from 4 MB to 256 KB (and remove glFlushMappedBufferRange for texture upload), RAM usage skyrockets to 3.2 GB.

By the way, I didn’t test it, but I think this piece of code might be enough to reproduce the issue. It is basically the rendering code I use to upload vertex data.

// Setup of the VBO
GLenum m_target = GL_ARRAY_BUFFER;
GLbitfield common_flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT;
GLbitfield map_flags = common_flags | GL_MAP_FLUSH_EXPLICIT_BIT;
GLbitfield create_flags = common_flags | GL_CLIENT_STORAGE_BIT;
size_t size = 16 * 1024 * 1024; // Play with the size to see the memory impact.

glBufferStorage(m_target, size, NULL, create_flags);
uint8_t* m_buffer_ptr = (uint8_t*) glMapBufferRange(m_target, 0, size, map_flags);

// Rendering loop
while (1) {
  // Upload the VBO in small chunks
  for (size_t offset = 0; offset < size; offset += 4) {
    int dummy = rand();
    size_t length = 4;
    memcpy(m_buffer_ptr + offset, &dummy, length);
    glFlushMappedBufferRange(m_target, offset, length);
    glDrawArrays(GL_POINTS, offset / 4, 1); // Basic draw to be sure the previous range is flushed
  }

  Vsync(); // End of frame
}

Hi gregory38,

Sorry for the late reply. I modified a small test project to use your code, and I can reproduce the out-of-memory issue: I do see huge memory usage on pause & capture.

We will do some investigation and let you know any news ASAP.


Thanks for the status update. I hope the situation can be improved.

Hi gregory38,

We found that the loop count is so large that it makes LGD consume too much memory. Although each flush is only 4 bytes, LGD tracks and saves additional information amounting to more than 4 bytes per flush. Multiplied by 16M iterations, that becomes a big value.

We just confirmed that your sample code works fine with LGD when the loop count comes down. It’s better to optimize your code, since 16M draw calls is a big hot spot.


Hello AYan,

Which parameters affect the size of the overhead? Does it depend on the size of the buffer? Is it a constant value? Does it depend on the flushed data size?

The above test was just to highlight the issue.

My application issues between 1K and 10K draw calls per frame. However, I have several big persistent buffers, so I end up with 2-3 flushes per draw call:

  • 1 VBO for vertices (8 MB)
  • 1 IBO for indices (8 MB)
  • 8 PBOs for texture transfers (8 × 4 MB)


Hi gregory38,

In the latest Linux Graphics Debugger, it should be related to the flushed data size.


Hello An,

By latest, do you mean the 1.0 release from several months ago, or the current unreleased development branch?


Hi gregory38,

I mean the unreleased branch.


Hello An,

Ok. I will wait patiently for the next release.

Thank you for the support.