vGPU - can I overprovision?

Hello,

We are gradually moving from vDGA to vGPU.

I am putting some people on K260Q, some on K280Q, etc., depending on their needs.

I just put someone on K260Q today and I could swear their machine said they had 4 GB. I saw it in dxdiag while checking their DirectX version. Shouldn't K260Q limit them to 2 GB? Should I see 2 GB in the guest OS for a K260Q profile, or does it show 4 GB while VMware just partitions 2 GB for that user?

Sorry, main question :) … Do I need to manually track which profile every user is on and stop assigning K2 cards when I see they are full? It won't warn me, will it? Or will it just not let me assign any more on that card?

For example, let's say I have one K2 card in a host. What would happen if I assigned 5 people to that card at K280Q? That's obviously more than the card could handle: at 4 GB each, two is all it could fit. Would it even let me assign 5?

My guess is it would power up the first two people and stop the other three from booting up… or no?

thanks -

  • Anyone else not getting emails when people reply to their posts? I click the green subscribe option on my post, but I never seem to get notified when there's a reply :(.

(No clue on the email notifications; let me know if you get one for this response.)

Correct: in your example the 3rd user and onward would not boot, and you would see an "insufficient graphics resources" error on each. The default distribution method for vSphere is breadth over depth. In other words, the 1st guest is powered up on an available host in the pool, the 2nd guest is powered up on the next host, and so on, to avoid loading up a single host. This round robin gets out of balance as you scale up and users disconnect, but the general idea is to favor guest performance over density on a host.
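The capacity behavior described above can be sketched in a few lines of Python. This is a simplified model under stated assumptions (a K2 card modeled as two physical GPUs with 4 GB of framebuffer each, K260Q = 2 GB and K280Q = 4 GB per VM, one profile type per physical GPU, least-loaded-first placement); the class and function names here are hypothetical, not any VMware or NVIDIA API:

```python
# Simplified model of vGPU placement, per the behavior described above:
# profiles consume fixed framebuffer on a physical GPU, and a VM that
# cannot be placed fails to boot with an insufficient-resources error.
# This is an illustration, not the actual vSphere scheduler.

PROFILE_FB_GB = {"K260Q": 2, "K280Q": 4}  # framebuffer per profile (GB)

class PhysicalGPU:
    """One GPU on a GRID K2 card (assumed 4 GB, one profile type at a time)."""
    def __init__(self, fb_gb=4):
        self.fb_gb = fb_gb
        self.profile = None   # vGPU allows only one profile per physical GPU
        self.used_gb = 0

    def can_fit(self, profile):
        if self.profile not in (None, profile):
            return False      # mixed profiles on one GPU are not allowed
        return self.used_gb + PROFILE_FB_GB[profile] <= self.fb_gb

    def place(self, profile):
        self.profile = profile
        self.used_gb += PROFILE_FB_GB[profile]

def power_on(gpus, profile):
    """Breadth-first: pick the least-loaded GPU that can still fit the profile."""
    candidates = [g for g in gpus if g.can_fit(profile)]
    if not candidates:
        return False          # "insufficient graphics resources": VM won't boot
    min(candidates, key=lambda g: g.used_gb).place(profile)
    return True

# One K2 card = two physical GPUs. Try to power on 5 K280Q guests:
k2_card = [PhysicalGPU(), PhysicalGPU()]
results = [power_on(k2_card, "K280Q") for _ in range(5)]
print(results)  # the first two boot, the remaining three fail
```

Running this yields two successful power-ons and three failures, matching the "first two boot, the rest error out" behavior for five K280Q assignments on a single K2 card.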

On the K260Q showing 4 GB: that should not be possible. The guest is allocated 2 GB of framebuffer, so any tool in the guest should show that amount. One thing worth checking is which figure dxdiag was reporting, since its "Approx. Total Memory" can include shared system memory on top of the dedicated video memory.

Consider also keeping GPU passthrough (vDGA) if you have large numbers of users who come and go in more volatile patterns, rather than assigning vGPU profiles to specific VMs. YMMV, so the only way to know for sure which works better is to try it out in your environment.