NVIDIA L4 GPU Hyper-V - Need Advice/Help

We are experiencing issues with our NVIDIA L4 GPU cards when attempting to use them with Discrete Device Assignment (DDA) on Windows Server Hyper-V VMs.

We followed the DDA setup steps as per the official documentation, and the process completed successfully without any errors.

  • OS: Windows Server 2022
  • GPU: NVIDIA L4 x 3
  • CPU: Intel Xeon Gold 6542Y (with VT-d enabled)
  • IOMMU and SR-IOV: Enabled in BIOS
  • Data Center L4 Driver for Windows 560.94 (Windows Server 2022) is installed on the Hyper-V host.
  1. However, the GPU is not detected inside the Hyper-V guest VM, which runs Debian 12 with backports kernel 6.11. We verified this with the lspci command, which does not show the GPU.
  2. Notably, we have successfully used NVIDIA RTX GPUs in similar Hyper-V DDA passthrough setups without requiring any additional licenses or configuration.

Request for Clarification:

  • Are there specific additional steps or configurations required to enable DDA passthrough for the NVIDIA L4 GPU cards?
  • Does the L4 GPU require a vGPU license (GRID License) or any special driver/firmware for such use cases, even for single VM passthrough (non-vGPU setup)?
  • What are the key differences between the RTX GPUs and the Tesla L4 GPUs that might explain this behavior?

Hi All,

I wanted to provide an update regarding the issue I faced with NVIDIA L4 GPUs on Windows Server Hyper-V VMs using Discrete Device Assignment (DDA). I was able to resolve this issue successfully by following the guidance provided in the NVIDIA documentation:

Bug #2812853: Microsoft DDA not working with some GPUs.

The problem occurred because GPUs with more than 16 GB of memory require additional MMIO (Memory-Mapped Input/Output) space for proper mapping in the guest VM. Without this configuration, the GPU wouldn’t be detected properly in the VM.


Solution:

The workaround involves allocating sufficient HighMemoryMappedIoSpace for the VM based on the GPU’s BAR1 memory size and the number of GPUs assigned to the VM. Here’s the step-by-step process:

  1. Calculate the required MMIO space using the following formula:

    MMIO space = 2 × gpu-bar1-memory × assigned-gpus

Where:

  • gpu-bar1-memory: the amount of BAR1 memory for one GPU (use the total GPU memory if the BAR1 size is not specified).
  • assigned-gpus: the number of GPUs assigned to the VM.
  2. Assign the calculated MMIO space to the VM using the Set-VM PowerShell cmdlet on the Hyper-V host.
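The arithmetic is simple; as a quick sanity check, here is a minimal Python sketch of the calculation (the function name is mine, not from the NVIDIA documentation):

```python
def mmio_space_gb(gpu_bar1_memory_gb, assigned_gpus):
    """MMIO space = 2 x gpu-bar1-memory x assigned-gpus (all in GB)."""
    return 2 * gpu_bar1_memory_gb * assigned_gpus

# One L4 with 23 GB of BAR1 memory:
print(mmio_space_gb(23, 1))  # 46
# Three L4s assigned to the same VM:
print(mmio_space_gb(23, 3))  # 138
```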

Example: NVIDIA L4 GPU with 23 GB Memory

For a VM with 1 GPU assigned, where each GPU has 23 GB of BAR1 memory:

MMIO space = 2 × 23 GB × 1 = 46 GB

PowerShell Command:

Run the following on the Hyper-V host to set the required MMIO space:

Set-VM -HighMemoryMappedIoSpace 46GB -VMName <VM_Name>

Example: Multiple GPUs Assigned

For 3 NVIDIA L4 GPUs, each with 23 GB of memory assigned to a single VM:

MMIO space = 2 × 23 GB × 3 = 138 GB

PowerShell Command:

Run the following to set the MMIO space for the VM:

Set-VM -HighMemoryMappedIoSpace 138GB -VMName <VM_Name>
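For context, the MMIO setting is one step in the usual host-side DDA sequence. A rough sketch using the cmdlets from Microsoft's DDA documentation (the location path and VM name are placeholders you must fill in for your system):

```powershell
# Placeholders: substitute your GPU's PCIe location path and your VM's name.
$locationPath = "<GPU_Location_Path>"
$vmName = "<VM_Name>"

# Dismount the GPU from the host so it can be assigned to a VM.
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath

# Reserve enough high MMIO space for the GPU's BAR1 (3 x L4 example: 138 GB).
Set-VM -VMName $vmName -HighMemoryMappedIoSpace 138GB

# Assign the GPU to the VM.
Add-VMAssignableDevice -LocationPath $locationPath -VMName $vmName
```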

Verification:

Once the MMIO space is configured, reboot the VM and check that the GPU is recognized correctly in the guest VM using tools like nvidia-smi or lspci (for Linux VMs).

I hope this helps anyone facing similar issues. If you have additional questions, feel free to ask!
