Hi there, I’m trying to integrate the GL_NV_stereo_view_rendering extension (Single Pass Stereo) into my own VR engine, but I’m confused about how to match the effect of the proper eye separation (eye pose) and asymmetric projection that would normally be obtained from OpenVR (with the Vive).
According to the extension’s documentation (https://www.khronos.org/registry/OpenGL/extensions/NV/NV_stereo_view_rendering.txt), the only thing you can do with the second output position of the vertex shader (gl_SecondaryPositionNV) is modify the x coordinate: the yzw components are taken as-is from gl_Position, and those components of gl_SecondaryPositionNV are thrown away. I’ve confirmed experimentally that this is how it works. The VRWorks gl_stereo_view_rendering demo code simply uses a symmetric frustum (from a perspective() call) and then shifts x a bit left or right in the vertex shader. I know that this kind of sideways offset (which doesn’t touch the w component) creates a kind of off-axis view, but it’s not the same as the one I want, and I’m not sure how to get from one to the other.
Maybe I’m wrong about the math, but I’m not sure it’s possible to find a simple x offset that gives the proper left- and right-eye ViewProj matrices, where “proper” means equivalent to those normally returned for the Vive by OpenVR, or for the Rift by the Oculus SDK. On the other hand, since the VRWorks stuff has been integrated into various big game engines, I assume it must be possible to do it right… Or are they all just settling for a “good enough” approximation, rather than recreating the exact HMD view-projection matrices that the Vive wants?
If anyone has experience combining Single Pass Stereo with OpenVR or the Oculus SDK (or anything else that does stereo properly, with an asymmetric frustum), any suggestions would be appreciated.