NVIDIA GPU Not Recognized in Windows Hyper-V VM - Code 12 Driver Error

Hey community,

I’m facing an issue with NVIDIA GPUs in my Windows virtual machine (VM) using Hyper-V with Discrete Device Assignment (DDA). I’ve added two GPUs to my Hyper-V Windows VM, both recognized in Device Manager under Display Adapters. However, one GPU is working perfectly, while the other is encountering a Code 12 error.

Here’s what I’ve tried:

  1. GPU Assignment with DDA:

    • I successfully added two GPUs to my Hyper-V Windows VM using the Discrete Device Assignment (DDA) method.
  2. NVIDIA Driver Installation:

    • After adding the GPUs, I installed the NVIDIA drivers on the VM.
  3. Device Manager Status:

    • Both GPUs are visible in Device Manager under Display Adapters.
  4. Code 12 Error:

    • One GPU is working fine.
    • The second GPU, however, displays a warning and is not functioning properly.
    • The error message states: “This device cannot find enough free resources that it can use. (Code 12) If you want to use this device, you will need to disable one of the other devices on this system.”

  5. Disable/Enable Test (see also the PowerShell check below):

    • When I disable the GPU1 adapter, GPU2 works fine.
    • The reverse (enabling GPU1 and disabling GPU2) also works, so each GPU functions correctly on its own.
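In case it helps with diagnosis, the Code 12 state can also be read out with PowerShell inside the VM (a minimal sketch; I’m assuming the standard PnpDevice module and the WMI Win32_PnPEntity class are available in the guest):

# Inside the Windows VM: list the display adapters and their PnP status.
Get-PnpDevice -Class Display | Select-Object FriendlyName, Status, InstanceId

# Win32_PnPEntity exposes the numeric problem code directly
# (12 = "cannot find enough free resources").
Get-CimInstance Win32_PnPEntity -Filter "PNPClass = 'Display'" |
    Select-Object Name, ConfigManagerErrorCode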

I’m unable to add multiple photos due to new account limitations. Therefore, I have compiled all the photos into a single image, which you can find here.

Question:
How can I overcome the Code 12 error and utilize both NVIDIA GPUs simultaneously in my Hyper-V Windows VM? Any insights, experiences, or tips from the community would be greatly appreciated.

Thanks in advance!

I have the same issue, but in my case it is impacting both GPU cards.

I would be interested if you could share the DDA settings that you used for the GPU card that is working.

I’ve also tried different OSes, such as Windows 10 and Ubuntu 18.04/20.04, without any luck.


Welcome @james.milne to the NVIDIA developer forums.

As to the original question, I am no expert in virtualization, but if you encountered this kind of issue on a physical machine, it would most likely point to the BAR settings in your BIOS, where the OS does not get enough address space to map the GPU VRAM. Depending on the mainboard/CPU/RAM combination of the host, you need to configure either the BIOS or the VM settings accordingly.

Can you verify if the Host can run both GPUs without issues? I would start there.
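Something along these lines on the host should tell you quickly (just a sketch; it assumes the NVIDIA driver and nvidia-smi are installed on the host and the GPUs have not yet been dismounted for DDA):

# On the Hyper-V host: both cards should enumerate with the driver loaded.
nvidia-smi -L

# And from PowerShell, both should report Status "OK" rather than "Error".
# (The name filter is an assumption; adjust it to match your cards.)
Get-PnpDevice -FriendlyName "*NVIDIA*" | Select-Object FriendlyName, Status, InstanceId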


Thanks @MarkusHoHo and @james.milne for your responses.

The aforementioned issues were successfully resolved by configuring the MMIO space, as outlined in the official Microsoft documentation: Microsoft Official Document.

GPUs in particular require additional MMIO space so that the VM can access the memory of the device. Each VM starts with 128 MB of low MMIO space and 512 MB of high MMIO space by default, but certain devices, or multiple devices together, may require more space and exceed these defaults.
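For what it is worth, here is one way to read the 33280 MB value used below (a back-of-the-envelope sketch, not an official formula; it assumes two 16 GB Tesla T4s, which is what lspci reports further down):

# One reading of the high-MMIO figure: total VRAM of both passed-through GPUs
# plus some headroom for their remaining BARs.
$gpuMemoryMB = 16384                    # Tesla T4: 16 GB per card (assumed)
$gpuCount    = 2
$bufferMB    = 512                      # headroom for additional BARs (assumed)
$gpuCount * $gpuMemoryMB + $bufferMB    # -> 33280 (MB), the value passed below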

Subsequently, I reconfigured the VM following the instructions in the Microsoft Official Document: VM Preparation for Graphics Devices (a quick verification sketch follows the steps below).

  • Enabled Write-Combining on the CPU using the cmdlet:
    Set-VM -GuestControlledCacheTypes $true -VMName VMName

  • Configured the 32-bit MMIO space with the cmdlet:
    Set-VM -LowMemoryMappedIoSpace 3Gb -VMName VMName

  • Configured greater than 32-bit MMIO space with the cmdlet:
    Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName VMName
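After applying these three settings, the values can be read back from the VM object to confirm they took effect (a sketch; I’m assuming the properties are exposed as shown by the Hyper-V PowerShell module, with the MMIO sizes reported in bytes):

# Read the DDA-related settings back from the VM configuration.
Get-VM -VMName VMName |
    Format-List GuestControlledCacheTypes, LowMemoryMappedIoSpace, HighMemoryMappedIoSpace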

To dismount the GPU devices from the host (a PowerShell sketch of these steps follows the list):

  • Located each device’s location path
  • Copied the device’s location path
  • Disabled the GPU in Device Manager
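A minimal sketch of those steps (the device class and the wildcard name filter are my assumptions; adjust them to match your cards):

# On the host: resolve the PCIe location path of each NVIDIA GPU and disable it
# before dismounting.
$gpus = Get-PnpDevice -Class Display -FriendlyName "*NVIDIA*"
$locationPath1 = (Get-PnpDeviceProperty -InstanceId $gpus[0].InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]
$locationPath2 = (Get-PnpDeviceProperty -InstanceId $gpus[1].InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]
foreach ($gpu in $gpus) { Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false }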

Dismounted the GPU devices from the host partition using the cmdlet:
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath1
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath2

Assigned the GPU devices to the VM using the cmdlet:
Add-VMAssignableDevice -LocationPath $locationPath1 -VMName VMName
Add-VMAssignableDevice -LocationPath $locationPath2 -VMName VMName

The configuration of the VM for DDA has been successfully completed.
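To double-check the assignment from the host side before starting the VM (again just a sketch with the standard Hyper-V cmdlets):

# List the devices currently assigned to the VM via DDA.
Get-VMAssignableDevice -VMName VMName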

Both GPUs are now recognized in my Linux Hyper-V VM:

root@DEB-HYPERV-6fabb3a422fb6e499b57dd2e11a7aa59:~# lspci
0000:00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (AGP disabled) (rev 03)
0000:00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 01)
0000:00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
0000:00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 02)
0000:00:08.0 VGA compatible controller: Microsoft Corporation Hyper-V virtual VGA
0000:00:0a.0 Ethernet controller: Digital Equipment Corporation DECchip 21140 [FasterNet] (rev 20)
076a:00:00.0 3D controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1)
e95f:00:00.0 3D controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1)
root@DEB-HYPERV-6fabb3a422fb6e499b57dd2e11a7aa59:~#
