I am using a GRID K2 in a Dell R720 running ESXi 6.0, with VMware Horizon View 6.2. I have configured 4 VMs in vGPU mode using the K240Q profile. Using the "nvidia-smi" command, I observed that the mapping of the 4 VMs to the GPUs is:
VM1 - GPU 1, VM2 - GPU 0, VM3 - GPU 1, VM4 - GPU 0
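For reference, this is how I checked the mapping (a minimal sketch, assuming SSH access to the host is enabled; the exact output layout may vary by driver version):

    # From an SSH session on the ESXi host:
    nvidia-smi
    # Each K2 GPU is listed with its PCI bus ID, and each running vGPU VM
    # shows up in the process table under the GPU it has been placed on.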
Can I change this mapping (or affinity) between a VM and a GPU? What command can I use?
Another related question: how do the two GPUs on the GRID K2 card share the PCIe bus? Is there a PCIe switch inside the GRID K2?
My apologies if these questions have already been answered. I browsed through the forum pages and couldn't find any related questions/answers. Thanks!
This is in the documentation included in the driver bundle.
On vSphere you cannot specify where each individual VM is placed, but you can choose the placement policy. The default is the breadth-first behaviour you are seeing now, where VMs are spread across the GPUs; the alternative is depth-first (consolidation), which loads VMs onto one GPU until it is full before moving to the next. With that policy, all 4 of your K240Q VMs would land on the same GPU.
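For example, with your four K240Q VMs on a single K2 (two physical GPUs), the two policies produce placements along these lines (which physical GPU fills first isn't guaranteed; this just illustrates the pattern):

Breadth-first (default): VM1 - GPU 0, VM2 - GPU 1, VM3 - GPU 0, VM4 - GPU 1
Depth-first (consolidation): VM1 - GPU 0, VM2 - GPU 0, VM3 - GPU 0, VM4 - GPU 0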
To set it, on each ESXi host add "vGPU.consolidation = true" to the file /etc/vmware/config.
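A minimal way to do that from the ESXi shell (assuming SSH access is enabled; back up the file first):

    cp /etc/vmware/config /etc/vmware/config.bak              # keep a backup copy
    echo 'vGPU.consolidation = true' >> /etc/vmware/config    # append the policy setting
    grep vGPU /etc/vmware/config                              # confirm the entry was added

The new policy applies when VMs are (re)started; already-running VMs keep their current placement.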