vsync cannot be disabled in Linux

Hello,

We have a cross-platform application (Windows/Mac OS X/Linux) that displays OpenGL images and streams them to a video file. To make this fast, we disable vertical synchronization (vsync).
This works fine with all graphics cards and platforms except Linux (e.g. Ubuntu) with Nvidia cards (e.g. an Nvidia NVS 5200M).

Under Linux, we use the following code to disable vsync:

typedef int (APIENTRY *AAAAGLSWAPINTERVALEXTPROC)(int);

// Resolve glXSwapIntervalSGI at run time and, if found, request a swap interval of 0.
AAAAGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
    (AAAAGLSWAPINTERVALEXTPROC)glXGetProcAddress((const GLubyte*)"glXSwapIntervalSGI");
if (wglSwapIntervalEXT)
    wglSwapIntervalEXT(0);
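
For comparison, the same idea through the GLX_EXT_swap_control extension would presumably look something like the sketch below; dpy and drawable are placeholders for the application's X11 display and current GLX drawable, not names from our code.

#include <GL/glx.h>

// Hypothetical sketch: disable vsync via GLX_EXT_swap_control instead of the SGI extension.
typedef void (*SwapIntervalEXTFn)(Display*, GLXDrawable, int);

void disableVsyncEXT(Display* dpy, GLXDrawable drawable)
{
    SwapIntervalEXTFn swapIntervalEXT =
        (SwapIntervalEXTFn)glXGetProcAddress((const GLubyte*)"glXSwapIntervalEXT");
    if (swapIntervalEXT)
        swapIntervalEXT(dpy, drawable, 0);   // interval 0 = do not wait for vblank
}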

The function address wglSwapIntervalEXT is retrieved successfully, but during rendering the following call takes a very long time, even with an almost empty scene:

swapBuffers();
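
The stall is easy to see with a simple timing around the swap. A sketch of that kind of measurement (dpy and drawable are again placeholders; in the real application the swap goes through the swapBuffers() wrapper above):

#include <GL/glx.h>
#include <chrono>
#include <cstdio>

// Time one buffer swap; with vsync still active on a 60 Hz display this
// reports roughly 16.7 ms, otherwise well under a millisecond.
void timedSwap(Display* dpy, GLXDrawable drawable)
{
    auto t0 = std::chrono::steady_clock::now();
    glXSwapBuffers(dpy, drawable);
    auto t1 = std::chrono::steady_clock::now();
    std::printf("swap took %.2f ms\n",
                std::chrono::duration<double, std::milli>(t1 - t0).count());
}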

What is going on?
Thanks for any insight.

Do you have double buffering disabled?

I studied latency in a pygame app recently and found that the program below spends roughly 16.65 milliseconds per frame when double buffering is enabled, which is one refresh interval of a 60 Hz display, but only about 30 microseconds per frame without it:

import pygame, time, sys
from pygame.locals import *
from OpenGL.GL import *

resolution = 1920, 1200
flags = HWSURFACE | OPENGL
# Pass any command-line argument to drop DOUBLEBUF and render single-buffered.
if len(sys.argv) <= 1:
    flags |= DOUBLEBUF
pygame.display.set_mode(resolution, flags)

done = False
latencies = []
while not done:
    frame_begin = time.time()
    glClearColor(0.0, 0.0, 0.0, 0.0)
    glClear(GL_COLOR_BUFFER_BIT)
    pygame.display.flip()      # with DOUBLEBUF this blocks until the buffer swap
    latencies.append(time.time() - frame_begin)
    for event in pygame.event.get():
        if event.type == QUIT:
            done = True
    if len(latencies) >= 1000:
        done = True

# Average frame time in milliseconds over the sampled frames.
print("%6.2f" % (1000 * sum(latencies) / len(latencies)))

I don’t know if it helps you, but I hope so at least. If you have the pygame and PyOpenGL modules you can test it yourself.
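For example, run the script once with no arguments to get the double-buffered average, and again with any extra argument (which drops the DOUBLEBUF flag) to get the single-buffered one; comparing the two numbers makes the difference obvious.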