Lots of discussion happening lately due to the renewed fervor over the VR revolution, and I was recently reminded of VRSS 2's support for the HP Reverb G2 Omnicept and potentially other Tobii-based eye-tracking devices.
OpenXR provides an abstraction for gaze and eye tracking through the XR_EXT_eye_gaze_interaction extension, which exposes the /user/eyes_ext path.
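For reference, here's a rough sketch of how an application binds a gaze pose action under that extension. This assumes an XrInstance created with XR_EXT_eye_gaze_interaction enabled and an existing action set; the function name `create_eye_gaze_action` is mine, and error handling is omitted:

```c
#include <string.h>
#include <openxr/openxr.h>

// Sketch: bind an eye-gaze pose action via XR_EXT_eye_gaze_interaction.
// Assumes the instance was created with the extension enabled.
XrAction create_eye_gaze_action(XrInstance instance, XrActionSet action_set)
{
    XrActionCreateInfo action_info = { XR_TYPE_ACTION_CREATE_INFO };
    strcpy(action_info.actionName, "eye_gaze");
    strcpy(action_info.localizedActionName, "Eye Gaze");
    action_info.actionType = XR_ACTION_TYPE_POSE_INPUT;

    XrAction gaze_action = XR_NULL_HANDLE;
    xrCreateAction(action_set, &action_info, &gaze_action);

    // The extension defines the gaze pose at /user/eyes_ext/input/gaze_ext/pose.
    XrPath gaze_pose_path, profile_path;
    xrStringToPath(instance, "/user/eyes_ext/input/gaze_ext/pose",
                   &gaze_pose_path);
    xrStringToPath(instance, "/interaction_profiles/ext/eye_gaze_interaction",
                   &profile_path);

    // Suggest the binding for the eye-gaze interaction profile.
    XrActionSuggestedBinding binding = { gaze_action, gaze_pose_path };
    XrInteractionProfileSuggestedBinding suggested = {
        XR_TYPE_INTERACTION_PROFILE_SUGGESTED_BINDING };
    suggested.interactionProfile = profile_path;
    suggested.suggestedBindings = &binding;
    suggested.countSuggestedBindings = 1;
    xrSuggestInteractionProfileBindings(instance, &suggested);

    return gaze_action;
}
```

The point being: the runtime already knows where the eyes are looking in a device-agnostic way, so anything sitting above it (driver or otherwise) could in principle consume that instead of talking to each vendor's tracker directly.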
I know that VRSS 2 operates at the driver level, but I wonder if there's some benefit to NVIDIA's driver listening to the OpenXR runtime to be more device-agnostic, and to providing things like NIS (NVIDIA Image Scaling) for OpenVR and OpenXR without needing to inject at the compositor step.
I'm not sure about the latency chain all the way up, but right now there seems to be a back-and-forth fight between GPU render latency and CPU render latency in applications that can generate exceptionally high draw-call counts via huge numbers of shaders, like VRChat.
Beyond that, I would love for NVIDIA to take another look at the optimizations possible to reduce frame render time in OpenVR/OpenXR, as right now it's not so much about visual fidelity as about finding a way to provide performance improvements to DX11 apps in social VR.
In VRChat, for example, with an RTX 3090 there are VERY common situations where even a CPU like the 5800X with 32 GB of DDR4-3600 CL16 will be unable to break 20 FPS at Steam's recommended settings. Even after adjusting all AA levels and rendering options, there is severe bottlenecking occurring somewhere, and I wonder if that could be helped.
An engine-level optimization targeting that situation (huge numbers of unique shaders and materials) would be welcome, as VRChat is the most popular and most important VR application in 2022.