Dell R720 with a single GRID K1 Card
XenServer 6.2 SP1
Windows 7 VMs using Driver 332.83 with K140Q vGPU Profiles
I’ve been running this setup for months and have had no general issues with vGPU performance. Our needs include virtualizing some OpenGL applications, and we’ve been using VNC and NoMachine to connect to the VMs (not ideal, but it works).
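For anyone who wants to sanity-check that OpenGL inside one of these remote sessions is actually going through the NVIDIA driver rather than a software renderer, a minimal sketch along these lines works in the Win7 guest (this assumes a MinGW or Visual Studio toolchain is available there; build with something like gcc glcheck.c -lopengl32 -lgdi32 -luser32). It just creates a throwaway GL context and prints the renderer strings:

```c
#include <windows.h>
#include <GL/gl.h>
#include <stdio.h>

int main(void)
{
    /* Throwaway top-level window, only needed to get a DC for a GL context. */
    HWND hwnd = CreateWindowA("STATIC", "glcheck", WS_POPUP, 0, 0, 64, 64,
                              NULL, NULL, NULL, NULL);
    HDC hdc = GetDC(hwnd);

    PIXELFORMATDESCRIPTOR pfd = {
        sizeof(pfd), 1,
        PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
        PFD_TYPE_RGBA, 32,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        24, 8, 0, PFD_MAIN_PLANE, 0, 0, 0, 0
    };
    int pf = ChoosePixelFormat(hdc, &pfd);
    if (pf == 0 || !SetPixelFormat(hdc, pf, &pfd)) {
        fprintf(stderr, "no suitable pixel format\n");
        return 1;
    }

    HGLRC ctx = wglCreateContext(hdc);
    wglMakeCurrent(hdc, ctx);

    /* On a working vGPU session these should name the NVIDIA driver,
       not the "GDI Generic" software renderer. */
    printf("GL_VENDOR   : %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER : %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION  : %s\n", (const char *)glGetString(GL_VERSION));

    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(ctx);
    ReleaseDC(hwnd, hdc);
    DestroyWindow(hwnd);
    return 0;
}
```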
Yes, the client device has 2 physical monitors connected to it.
I guess what I’m hoping for is an equivalent experience to connecting to a VM with a GPU passthrough configuration. For example, if I created a passthrough GPU VM with a Quadro K4000 or whatever, and physically connected two monitors to two physical ports on that card (or forced EDID settings in the Nvidia Control Panel), then when connecting via VNC or NoMachine, I would be presented with two independent monitors/display heads on the client machine.
Just wondering if there is a way to force a second "display head" on a vGPU VM somehow.
In that case, what you’re looking for isn’t possible.
The GRID cards have no physical ports; the vGPU simply drives the virtual displays presented in the VM session. If VNC or NoMachine let you configure multiple virtual displays and present an EDID to the NVIDIA driver, you may be able to achieve it; otherwise there is no way to force a second head.
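If you want to see exactly what the guest driver is exposing, you can enumerate the display devices from inside the Win7 VM. A minimal sketch using the stock Win32 EnumDisplayDevices / EnumDisplaySettings calls (assuming a C toolchain in the guest; this only reports what the standard API sees, nothing vGPU-specific):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DISPLAY_DEVICEA dd;
    DWORD i;

    /* Walk the display adapters/heads the guest driver exposes to Windows. */
    for (i = 0; ; i++) {
        ZeroMemory(&dd, sizeof(dd));
        dd.cb = sizeof(dd);
        if (!EnumDisplayDevicesA(NULL, i, &dd, 0))
            break;

        printf("%lu: %s (%s)%s%s\n", i, dd.DeviceName, dd.DeviceString,
               (dd.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP) ? " [attached]" : "",
               (dd.StateFlags & DISPLAY_DEVICE_PRIMARY_DEVICE) ? " [primary]" : "");

        /* Show the current mode of any head that has one. */
        DEVMODEA dm;
        ZeroMemory(&dm, sizeof(dm));
        dm.dmSize = sizeof(dm);
        if (EnumDisplaySettingsA(dd.DeviceName, ENUM_CURRENT_SETTINGS, &dm))
            printf("    current mode: %lux%lu @ %lu Hz\n",
                   dm.dmPelsWidth, dm.dmPelsHeight, dm.dmDisplayFrequency);
    }
    return 0;
}
```

Unless something can make a second entry show up there as attached to the desktop, the remoting tools only ever have the single head to capture.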
ASC, HP seems to have an internal utility called ForceEDIDs.exe that allows you to add virtual/fake displays to VMs; there is a video on YouTube showing it. NVIDIA also has a method to set up fake displays using EDID information in the NVIDIA Control Panel, but it is not available for GRID vGPU profiles.

Did you solve your problem? I haven’t solved mine yet. I’m using HP RGS to access remote VMs with GRID cards and XenServer 6.2 SP1 on a Dell R720.