We have a cross-platform application (Windows/macOS/Linux) that displays OpenGL images and streams them to a video file. To make this fast, we disable vertical synchronization.
This works fine on all graphics cards and platforms except Linux (e.g. Ubuntu) with NVIDIA cards (e.g. an NVIDIA NVS 5200M).
Under Linux, we use the following code to disable vsync:
typedef int (*PFNGLXSWAPINTERVALSGIPROC)(int);
PFNGLXSWAPINTERVALSGIPROC glXSwapIntervalSGI = (PFNGLXSWAPINTERVALSGIPROC)glXGetProcAddress((const GLubyte*)"glXSwapIntervalSGI");
if (glXSwapIntervalSGI)
    glXSwapIntervalSGI(0); // request a swap interval of 0, i.e. no vsync
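For context, on Linux the swap interval can be exposed through several different GLX extensions depending on the driver. Below is a minimal sketch (not what our application currently does) of querying the extension string and falling back between the EXT, MESA and SGI entry points; the trailing-underscore typedef names are only there to avoid clashing with the ones in glxext.h:

#include <GL/glx.h>
#include <string.h>

typedef void (*PFNGLXSWAPINTERVALEXTPROC_)(Display*, GLXDrawable, int);
typedef int  (*PFNGLXSWAPINTERVALMESAPROC_)(unsigned int);
typedef int  (*PFNGLXSWAPINTERVALSGIPROC_)(int);

static void disableVSync(Display* dpy, GLXDrawable drawable)
{
    const char* ext = glXQueryExtensionsString(dpy, DefaultScreen(dpy));
    if (strstr(ext, "GLX_EXT_swap_control")) {
        // Per-drawable interval; 0 is explicitly allowed and disables vsync.
        PFNGLXSWAPINTERVALEXTPROC_ f = (PFNGLXSWAPINTERVALEXTPROC_)
            glXGetProcAddress((const GLubyte*)"glXSwapIntervalEXT");
        if (f) f(dpy, drawable, 0);
    } else if (strstr(ext, "GLX_MESA_swap_control")) {
        PFNGLXSWAPINTERVALMESAPROC_ f = (PFNGLXSWAPINTERVALMESAPROC_)
            glXGetProcAddress((const GLubyte*)"glXSwapIntervalMESA");
        if (f) f(0);
    } else if (strstr(ext, "GLX_SGI_swap_control")) {
        // The SGI spec treats an interval <= 0 as an error, so a driver may
        // reject glXSwapIntervalSGI(0) even though others accept it.
        PFNGLXSWAPINTERVALSGIPROC_ f = (PFNGLXSWAPINTERVALSGIPROC_)
            glXGetProcAddress((const GLubyte*)"glXSwapIntervalSGI");
        if (f) f(0);
    }
}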
The function pointer is retrieved successfully, but during rendering the buffer-swap call still takes very long, even with an almost empty scene.
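To quantify "very long", here is a sketch of how the blocking time could be measured with a monotonic clock; glXSwapBuffers stands in for the blocking call, and dpy/drawable are assumed to be the display and window being rendered to:

#include <GL/glx.h>
#include <stdio.h>
#include <time.h>

static double nowMs(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1.0e6;
}

void timedSwap(Display* dpy, GLXDrawable drawable)
{
    glFinish();                    // drain queued GL work so only the swap is timed
    double t0 = nowMs();
    glXSwapBuffers(dpy, drawable); // the call that appears to block
    glFinish();                    // wait until the swap has actually completed
    printf("swap took %.2f ms\n", nowMs() - t0);
}

With vsync still active, this would print roughly a multiple of the display refresh period (about 16.7 ms at 60 Hz).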
What is going on?
Thanks for any insight.