Finding the correct (x-only) offset for Single Pass Stereo with HMD

Hi there, I’m trying to integrate the GL_NV_stereo_view_rendering extension (Single Pass Stereo) into my own VR engine, but I’m confused about how to reproduce the effect of the proper eye separation (eye pose) and asymmetric projection that would normally be obtained from OpenVR (with a Vive).

According to the extension’s documentation ([url]https://www.khronos.org/registry/OpenGL/extensions/NV/NV_stereo_view_rendering.txt[/url]), the only thing you can do with the second output position of the vertex shader (gl_SecondaryPositionNV) is modify the x coordinate: yzw are taken as-is from gl_Position, and those components of gl_SecondaryPositionNV are thrown away. I’ve confirmed experimentally that this is how it works. The VRWorks gl_stereo_view_rendering demo code simply uses a symmetric frustum (from a perspective() call) and then shifts a bit left or right in the vertex shader. I know that this kind of sideways offset (which doesn’t touch the W component) does create a kind of off-axis frustum, but it’s not the same as the one I want – at least I’m not sure how to get from one to the other.
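For concreteness, the demo’s trick amounts to something like the following minimal vertex shader (my own reconstruction, not the demo’s actual code; uMVP and uOffset are made-up names):

[code]
#version 450
#extension GL_NV_viewport_array2 : require
#extension GL_NV_stereo_view_rendering : require

uniform mat4 uMVP;     // one symmetric view-projection, shared by both eyes
uniform float uOffset; // half the eye separation, in clip-space x units

layout(location = 0) in vec3 aPosition;

// The secondary position is rasterized into gl_Layer + 1
layout(secondary_view_offset = 1) out highp int gl_Layer;

void main()
{
    gl_Position = uMVP * vec4(aPosition, 1.0);
    gl_SecondaryPositionNV = gl_Position;
    gl_Position.x            -= uOffset; // left eye shifted one way...
    gl_SecondaryPositionNV.x += uOffset; // ...right eye the other (only .x is used)
    gl_Layer = 0; // primary -> layer 0, secondary -> layer 1
}
[/code]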

Maybe I’m wrong about the math, but I’m not sure it’s possible to find a simple X offset that would give the proper left- and right-eye ViewProj matrices – “proper” meaning equivalent to those normally returned for the Vive by OpenVR, or for the Rift by the Oculus SDK. On the other hand, since the VRWorks stuff has been integrated into various big game engines, I assume it must be possible to do it right… Or are they all just using a “good enough” approximation, rather than trying to recreate the exact HMD view-projection matrices that the Vive expects?
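For what it’s worth, I’ve since scribbled out the algebra under some simplifying assumptions (mine: both eyes share one orientation and differ only by the IPD translation d along view-space x, and the two projections are mirror images of each other, so they share P00 and differ only in P02):

[code]
% Standard GL projection row for clip-space x, per eye:
%   x_c = P00 * x_v + P02 * z_v,   with  w_c = -z_v
% The right eye sees x_v^R = x_v^L - d, so:
\begin{align*}
x_c^R &= P_{00}\,(x_v^L - d) + P_{02}^R\, z_v \\
      &= x_c^L + (P_{02}^L - P_{02}^R)\, w_c - P_{00}\, d
\end{align*}
% i.e. the "correct" x-only offset is affine in w (alpha * w_c + beta),
% not a constant; a constant-only shift cannot reproduce it exactly.
[/code]

So under those assumptions an exact per-vertex x offset does exist – it just needs a w-proportional term as well as a constant one, rather than the constant-only shift the demo uses.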

If anyone’s had experience with Single-Pass Stereo and OpenVR or the Oculus SDK (or anything that does stereo properly, using an asymmetric frustum), any suggestions would be appreciated.

Thanks,
Glen.

To answer myself…

So I “slept on it”, and this morning realized there’s no need to use the simple “offset” trick from the example program. Instead, I simply send both MVP matrices as uniforms (an array of two) and use the appropriate one to calculate gl_Position and gl_SecondaryPositionNV for each eye. That works fine: I get the same result as regular two-pass rendering would, using the HMD’s provided eye pose/projection matrices.
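In case it helps anyone, here’s roughly what my vertex shader boils down to (uMVP is my name for the per-eye model-view-projection array; the rest follows the extension spec):

[code]
#version 450
#extension GL_NV_viewport_array2 : require
#extension GL_NV_stereo_view_rendering : require

uniform mat4 uMVP[2]; // 0 = left eye, 1 = right eye (from OpenVR’s matrices)

layout(location = 0) in vec3 aPosition;

// Secondary position goes to gl_Layer + 1 (the right-eye layer)
layout(secondary_view_offset = 1) out highp int gl_Layer;

void main()
{
    vec4 p = vec4(aPosition, 1.0);
    gl_Position            = uMVP[0] * p; // left-eye clip position
    gl_SecondaryPositionNV = uMVP[1] * p; // only .x survives; yzw come from gl_Position
    gl_Layer = 0;
}
[/code]

Note that keeping only x from the right-eye result is still exact as long as the two eyes share an orientation and their projections differ only in the x row, so that yzw would have come out identical anyway – which seems to hold for the Vive matrices, judging by the matching results.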

But it seems that what you can’t do with Single-Pass Stereo (as implemented by this extension) is compute correct lighting for each eye. I suppose one must choose a single ModelView matrix (middle/“forehead”, or one eye’s) to use when computing the eye-space position and normal for lighting… because no other data is view-specific (besides gl_SecondaryPositionNV). I guess we might try to rely on gl_Layer being 0 or 1 in the fragment shader, but normally the interpolated eye-space position and normal are calculated and sent from the vertex shader…
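One workaround (a sketch of the gl_Layer idea above, untested): do the lighting in world space instead of eye space, pass world-space position and normal from the vertex shader, and pick the per-eye camera position in the fragment shader by gl_Layer (available as a fragment input since GLSL 4.30). All the uniform and varying names here are hypothetical:

[code]
#version 450

uniform vec3 uEyePosWS[2]; // per-eye camera positions, world space
uniform vec3 uLightPosWS;  // a point light, world space

in vec3 vPosWS;    // world-space position, interpolated from the vertex shader
in vec3 vNormalWS; // world-space normal, likewise

out vec4 fragColor;

void main()
{
    vec3 N = normalize(vNormalWS);
    vec3 L = normalize(uLightPosWS - vPosWS);
    // gl_Layer is 0 (left) or 1 (right), so view-dependent terms
    // like specular can be made exact per eye:
    vec3 V = normalize(uEyePosWS[gl_Layer] - vPosWS);
    vec3 H = normalize(L + V);
    float diff = max(dot(N, L), 0.0);
    float spec = pow(max(dot(N, H), 0.0), 32.0);
    fragColor = vec4(vec3(diff + spec), 1.0);
}
[/code]

Diffuse terms don’t depend on the viewer at all, so only the view vector really needs the per-eye data – which is why sidestepping eye space entirely seems to avoid the problem.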