I have 4 x UCS B200 M4 blade servers, each with an NVIDIA Tesla M6 card installed. I can power on vGPU-enabled VMs on all four hosts and they run fine, but I can only migrate between 2 of the 4 hosts. All are running ESXi 7.0 U3 with driver version 535.129.03. Running `nvidia-smi vgpu -m` on each host shows `Migration capability: Yes` on the 2 hosts I can migrate between, and `Migration capability: No` on the other 2. How can I change this so I can migrate between all 4 hosts?
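For reference, a minimal sketch of how I'm checking each host, parsing the `nvidia-smi vgpu -m` output for the capability line (the `check_migration` helper and the exact output format are my assumptions based on what I see on these hosts):

```shell
# Hypothetical helper: takes captured `nvidia-smi vgpu -m` output and
# reports whether the host advertises vGPU migration support.
check_migration() {
  # Match the "Migration capability" line, tolerating spacing differences.
  if echo "$1" | grep -q "Migration capability.*Yes"; then
    echo "migration supported"
  else
    echo "migration not supported"
  fi
}

# On a real host you would capture the output first, e.g.:
#   out=$(nvidia-smi vgpu -m)
#   check_migration "$out"
check_migration "Migration capability:Yes"
check_migration "Migration capability:No"
```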