I was pointed to this article in the Project CARS 2 forum: https://devblogs.nvidia.com/vr-sli-accelerating-opengl-virtual-reality-multi-gpu-rendering/

I wanted to see if I (an end-user) can set up my two SLI-configured GTX 1070 cards to use this OpenGL multicast code to improve frame rate by dedicating one GPU per eye.

If this is possible, as the article states, please put me in touch with the right resource to show me how to set it up, or provide some steps I can follow to do it.



Same question here, but from a developer perspective. Using the extension itself (NV_gpu_multicast, or the earlier NVX one) looks pretty straightforward, but the documentation never explains how to set everything up.

Two GTX 1080 Ti cards here, on 390.77 drivers, Windows 10 x64 Enterprise. Both cards are part of an SLI group.

But neither multicast extension appears in the GL extension string.

Is there anything else to set up so that the driver exposes the multicast extensions?


Well I’m going to answer my own question, since I got it working.

Turns out you need to set an undocumented environment variable before creating an OpenGL context in order to get multicast support. This probably serves as a switch selecting between regular SLI and ‘VR SLI’ (aka multicast). While that makes perfect sense technically, I would have expected to find this small but crucial bit of information mentioned at least somewhere in the docs.

Anyway, for reference, the envvars are:


Setting either one before creating a context will make the multicast extensions appear in the extension string. Without at least one of these envvars set, the extensions won’t appear even if the hardware supports VR SLI and everything is set up properly in the control panel. Standard SLI will be enabled in that case (which is pretty useless in VR).

As an interesting side note, setting the former envvar also enables the NVX multicast extension on one of our AMD Radeon-based dev systems running the latest Catalyst drivers.