Hi!
I have a “problem” with the VM console after implementing DDA.
I installed the drivers on the Hyper-V host, configured DDA, and assigned a GPU to the VM; that part works fine.
Installing the GPU drivers inside the VM also completes without errors.
But after the driver installation and a reboot of the VM, I can no longer manage the VM through the Hyper-V console: the screen just goes black.
RDP to the VM works fine, and the drivers are correctly installed on both the host and the VM.
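From the RDP session I can still run checks inside the VM, for example (assuming the console adapter is named "Microsoft Hyper-V Video", as it is on my other guests):
#Check from inside the VM whether the Hyper-V console adapter is still present and enabled
Get-PnpDevice -FriendlyName "Microsoft Hyper-V Video" | Select-Object FriendlyName, Status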
What am I doing wrong here?
Also, do I need a license for the NVIDIA software?
My setup is:
Server 2016 Datacenter
Hyper-V
HP ProLiant DL380
NVIDIA Tesla M10
128 GB RAM
Profile: Virtualisation Optimisation
I have tested NVIDIA GRID 4.7 and NVIDIA Virtual GPU Software 6.2 (each on both the host and the VM).
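For what it's worth, this is roughly how I check which driver is active on the host and in the VM (a minimal sketch; the nvidia-smi path is where the installer put it for me and may differ per driver version):
#List display devices and their driver status
Get-PnpDevice -Class Display | Select-Object FriendlyName, Status
#Query the NVIDIA driver directly (path may vary per driver version)
& "C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe"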
I used these PowerShell commands for DDA:
#Configure the VM for a Discrete Device Assignment
$vm = "TS01"
#Set automatic stop action to TurnOff
Set-VM -Name $vm -AutomaticStopAction TurnOff
#Enable Write-Combining on the CPU
Set-VM -GuestControlledCacheTypes $true -VMName $vm
#Configure 32 bit MMIO space
Set-VM -LowMemoryMappedIoSpace 3Gb -VMName $vm
#Configure Greater than 32 bit MMIO space
Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName $vm
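#(3GB / 33280MB are the values from Microsoft's DDA example; I have not tuned them specifically for the M10.)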
#Find the Location Path and disable the Device
#Enumerate all PNP Devices on the system
$pnpdevs = Get-PnpDevice -PresentOnly
#Select only those devices that are Display devices manufactured by NVIDIA
$gpudevs = $pnpdevs | Where-Object {$_.Class -like "Display" -and $_.Manufacturer -like "NVIDIA"}
#Select the location path of the first device that’s available to be dismounted by the host.
$locationPath = ($gpudevs | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).data[0]
#Disable the PNP Device
Disable-PnpDevice -InstanceId $gpudevs[0].InstanceId
#Dismount the Device from the Host
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
#Assign the device to the guest VM.
Add-VMAssignableDevice -LocationPath $locationPath -VMName $vm
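For completeness, these are the commands I use to hand the GPU back to the host between driver tests (the standard DDA rollback, as far as I understand the docs):
#Remove the device from the VM
Remove-VMAssignableDevice -LocationPath $locationPath -VMName $vm
#Remount the device on the host
Mount-VMHostAssignableDevice -LocationPath $locationPath
#Re-enable the PNP device on the host
Enable-PnpDevice -InstanceId $gpudevs[0].InstanceId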
Kind regards