It seems that with recent drivers (320.xx, possibly earlier), in one particular application (SolidWorks), glGetQueryObjectiv(…, GL_QUERY_RESULT, &result) always places 100 in the result, no matter what was actually done/requested.
- Microsoft Windows 7 Enterprise x64 6.1.7601 Service Pack 1 Build 7601;
- NVIDIA Quadro 2000, driver version 126.96.36.1999 (also confirmed on Quadro FX 1800);
- SolidWorks 2013 x64 Edition;
Our product is a plugin-like application which, when loaded, embeds its own graphics into the CAD rendering context by injecting OpenGL API calls on particular events. The aforementioned behavior ruins most of our OpenGL 3.3 effects that are based on transform feedback and the corresponding queries.
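To illustrate the failure mode, here is a minimal self-contained sketch of the read-back step our effects depend on. The GL types and entry point below are local stand-ins (the real ones come from the driver, and the stub deliberately mimics the behavior we observe under SolidWorks); only the pattern itself is what our actual code does:

```c
/* Minimal stand-ins so the sketch compiles without a GL context. */
typedef unsigned int GLuint;
typedef unsigned int GLenum;
typedef int GLint;

#define GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN 0x8C88
#define GL_QUERY_RESULT 0x8866

/* Stub mimicking what we observe under SolidWorks: the driver
 * ignores the query and always writes 100 (64h). */
static void glGetQueryObjectiv(GLuint id, GLenum pname, GLint *params)
{
    (void)id; (void)pname;
    *params = 100; /* regardless of what was actually recorded */
}

/* The read-back step of a typical transform-feedback effect:
 *   glBeginQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN, q);
 *     ...feedback pass...
 *   glEndQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN);
 * and then: */
GLint read_primitives_written(GLuint q)
{
    GLint written = 0;
    glGetQueryObjectiv(q, GL_QUERY_RESULT, &written);
    return written; /* under SolidWorks: always 100 */
}
```

Any effect that sizes a follow-up draw or buffer from this count then acts on 100 primitives even when the feedback pass produced none.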
We first thought of it as a bug, but there are two major pieces of evidence to the contrary:
- Very much the same code injected into other CAD applications (or launched stand-alone) works nicely;
- After some digging, we found the following instruction in glGetQueryObjectiv’s implementation:
nvoglv64 + 0x632081 (glGetQueryObjectiv + 0x21):
test dword ptr [nvoglv64!DrvPresentBuffers+0xf05450],80000h
At this instruction a bitfield is tested against the 80000h flag, which is set for SolidWorks (and not for other applications). Shortly after, at the next ‘je’ instruction, control falls into a short branch which eventually reaches this instruction:
nvoglv64 + 0x6320b1 (glGetQueryObjectiv + 0x51):
mov dword ptr [r8],64h
Here it places 100 (64h) at the address of the passed argument and exits the function.
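In C terms, the disassembled path appears to behave roughly like the sketch below. The state-word name and layout are guesses for illustration (the real word lives at nvoglv64!DrvPresentBuffers+0xf05450); only the 80000h flag and the 64h constant come straight from the disassembly:

```c
#define APP_PROFILE_SOLIDWORKS 0x80000u /* the flag tested at +0x21 */

/* Hypothetical driver-side state word standing in for the bitfield
 * at nvoglv64!DrvPresentBuffers+0xf05450. */
static unsigned int g_app_profile_bits;

/* What glGetQueryObjectiv appears to do on the profiled path. */
static void fake_glGetQueryObjectiv(int *params)
{
    if (g_app_profile_bits & APP_PROFILE_SOLIDWORKS) {
        *params = 0x64; /* mov dword ptr [r8], 64h -> result = 100 */
        return;         /* early exit: the query is never consulted */
    }
    /* ...normal path: fetch the real query result... */
    *params = -1; /* placeholder for the real result in this sketch */
}
```

With the flag raised, every caller gets 100 back no matter which query object or pname was requested.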
So it seems to be some kind of optimization with weird consequences for us. Is there any workaround? Can we make this function behave consistently with the spec without messing around with opcodes, bitfields, etc.?
Thanks in advance,