Discrete Device Assignment questions

I am using Discrete Device Assignment to assign an Nvidia Quadro M2000 GPU to an 8-core virtual machine (Hyper-V 2016) with Windows 10 as the guest OS. I used the following Nvidia documentation for inspiration:

https://docs.nvidia.com/grid/5.0/grid-vgpu-user-guide/index.html

And this Microsoft Doc:

https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment

But I have a few questions about the Nvidia doc:

Q1: The documentation for 3. Using GPU Pass-Through states: ‘On supported versions of Microsoft Windows Server with Hyper-V role, you can use Discrete Device Assignment (DDA) to enable a VM to access a GPU directly.’

Is the list of supported/functional GPUs the same as the vGPU list? Or can lower-end GPUs work as well?

Q2: The documentation section 3.3, Using GPU Pass-Through on Microsoft Windows Server, proposes that one look up the location path in Device Manager for the GPU one wants to assign to a VM. This implies that an Nvidia graphics driver HAS been installed on the host.

Do Nvidia GPUs require a graphics driver on the host (say, to enable something), or is it sufficient to install a graphics driver in each virtual machine (guest)?
(Microsoft docs states: As Discrete Device Assignment passes the entire PCIe device into the Guest VM, a host driver is not required to be installed prior to the device being mounted within the VM. The only requirement on the host is that the device’s PCIe Location Path can be determined.)

Q3: In my PowerShell script I set some conservatively LARGE values for LowMemoryMappedIoSpace and HighMemoryMappedIoSpace.
How precise should these values be? Can I find the required sizes somewhere (PowerShell preferred), or is it sufficient that they are ‘big enough’?
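(For what it's worth, the Microsoft DDA planning doc linked above points at a machine-profile survey script, SurveyDDA.ps1 in the Microsoft/Virtualization-Documentation GitHub repo, that reports each device's MMIO requirements. A hedged sketch of checking what is currently configured; I am assuming the VM object returned by Get-VM exposes properties matching the Set-VM parameter names, so verify on your own host:)

```powershell
# Sketch (assumption: the Get-VM output object exposes properties named
# after the Set-VM -LowMemoryMappedIoSpace / -HighMemoryMappedIoSpace
# parameters; values would be in bytes). Requires a Hyper-V host.
Get-VM -Name "DKTA-0103SC0099" |
    Select-Object Name, LowMemoryMappedIoSpace, HighMemoryMappedIoSpace
```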

My VM Settings for the M2000 test were:

$MyNVidia = Get-PnpDevice | Where-Object {$_.Class -eq "Display"} | Where-Object {$_.Service -eq "nvlddmkm"}
Disable-PnpDevice -InstanceId $MyNVidia[0].InstanceId -Confirm:$false
$DataOfGPUToDismount = Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths -InstanceId $MyNVidia[0].InstanceId
$locationpath = ($DataOfGPUToDismount).Data[0]
Dismount-VMHostAssignableDevice -LocationPath $locationpath -Force

Aha, there you are!

Get-VMHostAssignableDevice

$vmName = "DKTA-0103SC0099"

Power off the VM

Stop-Vm -Name $vmName

The VM cannot use SaveState with an assigned device, so it must shut down (or turn off) when the host stops

Set-VM -Name $vmName -AutomaticStopAction Shutdown

Assign GPU to VM

Add-VMAssignableDevice -LocationPath $locationpath -VMName $vmName

#Enable Write-Combining on the CPU
Set-VM -GuestControlledCacheTypes $true -VMName $vmName
#Configure 32 bit MMIO space
Set-VM -LowMemoryMappedIoSpace 3Gb -VMName $vmName
#Configure Greater than 32 bit MMIO space
Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName $vmName

Done, the VM has the GPU now

Start-Vm -Name $vmName -ComputerName DKSR-TESTHV-13
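(Not part of my original run, but for completeness here is a hedged sketch of reversing the steps above to give the GPU back to the host, reusing the $locationpath, $MyNVidia and $vmName variables from earlier; verify the cmdlet behavior on your own host before relying on it:)

```powershell
# Sketch: undo the assignment and return the GPU to the host.
Stop-VM -Name $vmName
# Detach the device from the VM
Remove-VMAssignableDevice -LocationPath $locationpath -VMName $vmName
# Remount it on the host
Mount-VMHostAssignableDevice -LocationPath $locationpath
# Re-enable the device so the host driver can bind again
Enable-PnpDevice -InstanceId $MyNVidia[0].InstanceId -Confirm:$false
```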

Hi jm62xgz,

Not sure if you have a GRID license. If the answer is yes, please use the website http://nvid.nvidia.com/dashboard/ to report any bugs found during development.

If you haven’t purchased a GRID license or only have a test license, please send your question to this email address: ChinaEnterpriseSupport@nvidia.com

Thanks