I’m facing an issue with NVIDIA GPUs in my Windows virtual machine (VM) using Hyper-V with Discrete Device Assignment (DDA). I’ve added two GPUs to my Hyper-V Windows VM, both recognized in Device Manager under Display Adapters. However, one GPU is working perfectly, while the other is encountering a Code 12 error.
Here’s what I’ve tried:
GPU Assignment with DDA:
I successfully added two GPUs to my Hyper-V Windows VM using the Discrete Device Assignment (DDA) method.
NVIDIA Driver Installation:
After adding the GPUs, I installed the NVIDIA drivers on the VM.
Device Manager Status:
Both GPUs are visible in Device Manager under Display Adapters.
Code 12 Error:
One GPU is working fine.
The second GPU, however, displays a warning and is not functioning properly.
The error message states: “This device cannot find enough free resources that it can use. (Code 12) If you want to use this device, you will need to disable one of the other devices on this system.”
Disable/Enable Test:
When I disable the GPU1 adapter, GPU2 works fine.
The reverse (enabling GPU1 and disabling GPU2) also works, which confirms that each GPU functions correctly on its own.
I’m unable to add multiple photos due to new account limitations. Therefore, I have compiled all the photos into a single image, which you can find here.
Question:
How can I overcome the Code 12 error and utilize both NVIDIA GPUs simultaneously in my Hyper-V Windows VM? Any insights, experiences, or tips from the community would be greatly appreciated.
Welcome @james.milne to the NVIDIA developer forums.
As to the original question, I am no expert in virtualization, but if you encountered this kind of issue on a physical machine it would point to the BAR settings in your BIOS, where the OS does not get enough address space to map the GPU VRAM. Depending on the mainboard/CPU/RAM combination of the host, you either need to configure that in the BIOS or adjust the VM settings accordingly.
Can you verify whether the host can run both GPUs without issues? I would start there.
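For reference, one quick way to check this from an elevated PowerShell prompt on the host (before any DDA dismount) is to list the display-class devices and confirm both report an OK status. This is only a sketch; the `NVIDIA` filter is an assumption about the adapter names, so adjust it for your cards:

```powershell
# Run on the Hyper-V host before dismounting anything for DDA.
# Lists the display adapters and their status; both GPUs should show "OK".
Get-PnpDevice -Class Display |
    Where-Object { $_.FriendlyName -match 'NVIDIA' } |   # name filter is an assumption
    Select-Object Status, FriendlyName, InstanceId |
    Format-Table -AutoSize
```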
The issue described above was successfully resolved by configuring MMIO space, as outlined in the official Microsoft documentation on Discrete Device Assignment.
GPUs in particular require additional MMIO space so that the VM can access the memory of the device. Each VM starts with 128 MB of low MMIO space and 512 MB of high MMIO space by default, but certain devices, or several devices assigned together, may require more than that.
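As a rough illustration of where a figure like 33280 MB can come from, the sketch below assumes two 16 GB cards; the exact requirement depends on each device's BAR sizes, which Microsoft's machine profile script can report, so treat the numbers as placeholders:

```powershell
# Rough sizing sketch only. Assumption: two 16 GB GPUs; substitute the VRAM
# and BAR requirements of your actual cards.
$gpuVramMb  = 16384                                  # VRAM per GPU in MB (assumed)
$gpuCount   = 2
$headroomMb = 512                                    # extra room for smaller BARs (assumed)
$highMmioMb = ($gpuVramMb * $gpuCount) + $headroomMb
"Suggested -HighMemoryMappedIoSpace value: ${highMmioMb}Mb"   # 33280Mb with these numbers
```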
Enabled Write-Combining on the CPU using the cmdlet: Set-VM -GuestControlledCacheTypes $true -VMName VMName
Configured the 32-bit MMIO space with the cmdlet: Set-VM -LowMemoryMappedIoSpace 3Gb -VMName VMName
Configured greater than 32-bit MMIO space with the cmdlet: Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName VMName
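Putting those three commands together, they can be run back to back from an elevated prompt on the host while the VM is shut down. This is just the same sequence consolidated; `MyVM` is a placeholder VM name:

```powershell
# Run on the host with the target VM powered off. 'MyVM' is a placeholder name.
$vmName = 'MyVM'

Set-VM -GuestControlledCacheTypes $true -VMName $vmName    # enable write-combining on the CPU
Set-VM -LowMemoryMappedIoSpace 3Gb -VMName $vmName         # 32-bit MMIO space
Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName $vmName    # greater-than-32-bit MMIO space
```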
To dismount the GPU devices from the host:
Located the device’s location path
Copied the device’s location path
Disabled the GPU in Device Manager
Dismounted each GPU from the host partition using the cmdlet: Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath1 (and again with $locationPath2)
Assigned each GPU to the VM using the cmdlet: Add-VMAssignableDevice -LocationPath $locationPath1 -VMName VMName (and again with $locationPath2); the sketch below shows the whole sequence
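Put together, the dismount-and-assign sequence for both cards could look roughly like the following. It is a sketch only: `MyVM` and the `NVIDIA` name filter are assumptions, and it is worth checking which entry of the LocationPaths property applies to your cards:

```powershell
# Sketch: detach both NVIDIA GPUs from the host and assign them to the VM.
# Run as Administrator on the Hyper-V host; 'MyVM' is a placeholder VM name.
$vmName = 'MyVM'
$gpus = Get-PnpDevice -Class Display | Where-Object { $_.FriendlyName -match 'NVIDIA' }

foreach ($gpu in $gpus) {
    # First entry of DEVPKEY_Device_LocationPaths is the PCIROOT(...)#PCI(...) path DDA expects
    $locationPath = (Get-PnpDeviceProperty -InstanceId $gpu.InstanceId `
                        -KeyName 'DEVPKEY_Device_LocationPaths').Data[0]

    Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false        # disable on the host
    Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath   # remove from host partition
    Add-VMAssignableDevice -LocationPath $locationPath -VMName $vmName   # assign to the VM
}
```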
The configuration of the VM for DDA has been successfully completed.
Both GPUs are now recognized in my Linux Hyper-V VM.