App runs *only* with __GL_THREADED_OPTIMIZATIONS=1 --- what's it do?

(also posted in the OpenGL forum)
I’m trying to get a large commercial app running under Linux. It uses OpenGL, in part via GtkGLLib (GTK+2 for the vanilla UI). It runs under Parallels using the Parallels/NVIDIA driver, but on real machines (mine and customers’) it runs only with __GL_THREADED_OPTIMIZATIONS=1. The app runs fine under Parallels on several Linux distributions (CentOS, Mint, Ubuntu, Kubuntu). Customers on various machines and OSes need __GL_THREADED_OPTIMIZATIONS=1 set for it to work.

Without __GL_THREADED_OPTIMIZATIONS=1, the app core-dumps several nested calls under glXCreateContext. There’s some evidence from one core dump that the segfault was actually in one of the OTHER threads that were sitting around waiting for work. Calling XInitThreads immediately at app startup has no effect on the crash, and it doesn’t matter whether I add -lpthread before -lc either. I’m currently testing on a CentOS 6.4 system with a GTX 650. Customers see crashes, including on Quadro boards, unless threaded optimizations are turned ON.
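(For completeness, this is roughly how the XInitThreads attempt looks; just a sketch, the real startup does a lot more:)

[code]
#include <X11/Xlib.h>
#include <gtk/gtk.h>

int main(int argc, char **argv)
{
    /* XInitThreads() must be the very first Xlib call in the process,
     * before gtk_init() and before any GL/GLX work. */
    XInitThreads();

    gtk_init(&argc, &argv);
    /* ... create GTK/GL widgets, spin up pthread workers, etc. ... */
    gtk_main();
    return 0;
}
[/code]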

This suggests to me that __GL_THREADED_OPTIMIZATIONS=1 does some additional piece of initialization that I need, though it’s not obvious what. The app makes all its GTK and GL calls on the main thread. It is extensively threaded with pthread worker threads, but the workers don’t do any UI work (the app runs fine on OS X).

I’d like the app to work with or without __GL_THREADED_OPTIMIZATIONS. I could turn it on always and call it a day, but that seems to have side effects, including potentially slowing the app down, and this sort of thing tends to bite one later. So I’d like to make it WORK reliably in both cases first, then decide which way to set it.
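(If I do end up forcing it on unconditionally, the obvious route is a wrapper script that exports the variable. A sketch of doing the same from inside the binary itself, by re-exec’ing with the variable set; Linux-only and untested here:)

[code]
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    (void)argc;
    /* Re-exec ourselves with __GL_THREADED_OPTIMIZATIONS=1 so the
     * driver is guaranteed to see it, however early it reads the
     * environment. The getenv() check prevents an exec loop. */
    if (getenv("__GL_THREADED_OPTIMIZATIONS") == NULL) {
        setenv("__GL_THREADED_OPTIMIZATIONS", "1", 1);
        execv("/proc/self/exe", argv);
        perror("execv");  /* only reached if the re-exec fails */
    }

    /* ... normal GTK/GL startup continues here ... */
    return 0;
}
[/code]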

Please: any step-by-step of what __GL_THREADED_OPTIMIZATIONS=1 does, and any additional ideas? This is a major showstopper for my release; it’s otherwise basically good to go. Thanks.

It seems to be a new feature in the 310.14 and later drivers, according to the links below:

[url]http://www.phoronix.com/scan.php?page=article&item=nvidia_threaded_opts&num=1[/url]
Threaded optimization - OpenGL - Khronos Forums

It’s not all that much information, though. You didn’t mention whether you had looked at driver versions, so perhaps that’s a good starting point.

Nailed this… the NVIDIA drivers throw a signal during startup, which caused a sem_wait of mine to return unexpectedly with EINTR, leading to a segfault. Good thing I was able to drop __GL_THREADED_OPTIMIZATIONS… it slows things down.
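For anyone who hits the same thing, the fix on my side was simply to retry the wait when it’s interrupted, along these lines (a sketch, not my exact code):

[code]
#include <semaphore.h>
#include <errno.h>

/* Retry sem_wait() when it is interrupted by a signal (EINTR), e.g.
 * the signal the NVIDIA driver raises during startup, instead of
 * treating the early return as "work is ready". */
static int sem_wait_noeintr(sem_t *sem)
{
    int rc;
    do {
        rc = sem_wait(sem);
    } while (rc == -1 && errno == EINTR);
    return rc;
}
[/code]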

I’m glad you were able to resolve the problem.

There are a number of things your application could be doing that defeat the benefit of the threaded optimizations. The big one is calling glGetError(), but in general any glGet* call will cause a synchronization point between the threads. The other common source of slowdowns is calling functions that return data, such as glReadPixels or glMapBuffer. If you can avoid those, you are likely to speed up your application even when threaded optimizations are disabled.
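For example, a common pattern is to compile GL error checks out of release builds entirely, something like this (a rough sketch for a C code base):

[code]
#include <GL/gl.h>
#include <stdio.h>

/* glGetError() forces a synchronization point with the driver's
 * worker thread, so only query it in debug builds. */
#ifdef NDEBUG
#define CHECK_GL_ERROR() ((void)0)
#else
#define CHECK_GL_ERROR()                                            \
    do {                                                            \
        GLenum gl_err_ = glGetError();                              \
        if (gl_err_ != GL_NO_ERROR)                                 \
            fprintf(stderr, "GL error 0x%04x at %s:%d\n",           \
                    gl_err_, __FILE__, __LINE__);                   \
    } while (0)
#endif
[/code]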