I’m trying to get a large commercial app running under Linux. It uses OpenGL, partly via GtkGLLib (GTK+2 for the vanilla UI). It runs under Parallels using the Parallels/NVIDIA driver, but on real machines (mine and customers’) it only runs with __GL_THREADED_OPTIMIZATIONS=1. The app runs fine under Parallels on several Linux distributions (CentOS, Mint, Ubuntu, Kubuntu); customers on various machines and OSs all need __GL_THREADED_OPTIMIZATIONS=1 for it to work.
Without __GL_THREADED_OPTIMIZATIONS=1, the app core-dumps several nested calls below glXCreateContext. One core dump suggests the segfault was actually in one of the OTHER threads that were sitting around waiting for work. Calling XInitThreads() immediately at app startup has no effect on the crash, and adding -lpthread before -lc makes no difference either. I’m currently testing on a CentOS 6.4 system with a GTX 650; customers see the crashes too, including on Quadro boards, unless threaded optimizations are turned ON.
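In case the ordering matters, this is roughly how I’m calling it (a minimal sketch, not the app’s real startup code; the widget-creation part is elided):

    #include <X11/Xlib.h>
    #include <gtk/gtk.h>

    int main(int argc, char **argv)
    {
        /* XInitThreads() must be the very first Xlib-related call in the
         * process, before gtk_init() touches the display and before any
         * GLX call. It returns 0 on failure. */
        if (!XInitThreads())
            return 1;

        gtk_init(&argc, &argv);

        /* ... create the GL-capable widgets, realize them, then ... */
        gtk_main();
        return 0;
    }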
This suggests to me that __GL_THREADED_OPTIMIZATIONS=1 performs some additional piece of initialization that I need, though it’s not obvious what. The app makes all of its GTK and GL calls on the main thread. It is extensively threaded with pthreads worker threads, but the workers don’t do any UI work (and the app runs fine on OS X).
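To convince myself the workers really never touch GL or GTK, I’ve been thinking of instrumenting the call sites with something like the helper below (the names are mine, purely hypothetical):

    #include <assert.h>
    #include <pthread.h>

    static pthread_t g_main_thread;

    /* Call once, first thing in main(). */
    void remember_main_thread(void)
    {
        g_main_thread = pthread_self();
    }

    /* Sprinkle before GL / GLX / GTK call sites to catch a stray worker. */
    void assert_on_main_thread(void)
    {
        assert(pthread_equal(pthread_self(), g_main_thread));
    }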
I’d like the app to work with or without __GL_THREADED_OPTIMIZATIONS. I could turn it on always and call it a day, but that seems to have side effects, including potentially slowing the app down, and this sort of thing tends to bite one later. So I’d like to make it work reliably in both cases first, then decide which way to set it.
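If I do end up forcing it on, one option I’m considering (assuming the NVIDIA libGL reads the variable during its own initialization, so a plain setenv() in main() might be too late) is to set it and re-exec the binary, so customers don’t have to export anything themselves:

    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        (void)argc;

        /* If the variable isn't set yet, set it and re-exec ourselves so
         * that libGL sees it from the moment it is loaded. */
        if (getenv("__GL_THREADED_OPTIMIZATIONS") == NULL) {
            setenv("__GL_THREADED_OPTIMIZATIONS", "1", 1);
            execv("/proc/self/exe", argv);
            /* execv() only returns on failure; fall through and run anyway. */
        }

        /* ... normal startup: XInitThreads(), gtk_init(), etc. ... */
        return 0;
    }

That’s Linux-specific (/proc/self/exe), but so is the problem; it just sidesteps the question of exactly when the driver reads the variable.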
Please: can anyone give a step-by-step of what __GL_THREADED_OPTIMIZATIONS=1 actually does, or suggest additional ideas? This is a major showstopper for my release; it’s otherwise basically good to go. Thanks. (Also posted to the Linux forum.)