GPU card, licenses, technology (vDGA etc) for Horizon View

Hello,

I’m digging through the information on different NVIDIA sites. We’re creating a business case for physical GPU cards. We’re running Horizon View and have an average of 150 CCUs. Of those users, about 100 need basic GPU performance and about 50 need a higher amount of GPU performance. Is it even possible to have users with different amounts of available GPU power?

What current technologies are there for GPU sharing within VMware Horizon (vDGA etc.), which card(s) would work best for the above scenario, and which licenses would we need?

I have found the following sources, but it is still quite unclear:

A100, L40, L4, A30 and A16 seem to be worth looking into.

and the buy-grid/guide, which I can’t link because new users can only include a single link per post…
The licensing model doesn’t make much sense without any context on when each license is required: vPC, vWS and vApps. Or is the licensing model completely different when installing on-prem GPU cards?
It’s all just a big blur right now.

Thanks for any clarification.

Hi ST84,

Thanks for the question.

What current technologies are there for GPU sharing within VMware Horizon (vDGA etc.), which card(s) would work best for the above scenario, and which licenses would we need?

vGPU is NVIDIA’s technology for GPU sharing for virtual desktops. vGPU is licensed software, licensed per CCU (concurrent user). For virtual desktops the L4, L40, A16 or A40 would be the current options. The “L” and “A” refer to different generations of GPU, with “L” for Ada Lovelace being the latest technology.
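For context on what “assigning a profile” looks like mechanically: under the hood, vSphere adds a shared PCI device to the desktop VM whose backing names the vGPU profile, and different pools can use different profile sizes. Below is a minimal sketch using the pyVmomi SDK; the profile name "nvidia_a16-2b" and the function are illustrative assumptions, and in a real Horizon deployment you would normally set this through the pool/VM settings rather than scripting it.

```python
# Sketch: attach a vGPU profile to a powered-off VM via pyVmomi.
# Assumes an existing vim.VirtualMachine handle obtained from a live
# vCenter connection (pyVim.connect.SmartConnect); profile name is illustrative.
from pyVmomi import vim

def attach_vgpu_profile(vm: vim.VirtualMachine, profile: str = "nvidia_a16-2b"):
    """Reconfigure a powered-off VM to add a shared PCI (vGPU) device."""
    dev_spec = vim.vm.device.VirtualDeviceSpec()
    dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    dev_spec.device = vim.vm.device.VirtualPCIPassthrough()
    dev_spec.device.backing = vim.vm.device.VirtualPCIPassthrough.VmiopBackingInfo()
    dev_spec.device.backing.vgpu = profile  # the vGPU profile string

    spec = vim.vm.ConfigSpec(deviceChange=[dev_spec])
    return vm.ReconfigVM_Task(spec=spec)  # vSphere task to wait on
```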

For GPU sharing for compute tasks like HPC, AI, Deep Learning, the A100 and A30 can also be used but require a different licensing scheme called NV AI Enterprise. I don’t think you are looking for this.

Of those users, about 100 need basic GPU performance and about 50 need a higher amount of GPU performance.

vPC is our license/technology for basic GPU acceleration aimed at knowledge workers, i.e. users running a Windows 10 desktop whose everyday productivity applications benefit from GPU acceleration. vPC can provide a better quality of experience to these users. The A16 is the most commonly used GPU for vPC installations.

vWS is our license/technology for users who are running 3D graphics accelerated applications, i.e. anything from AutoCAD to more complex 3D applications like Maya. This lengthy guide provides information on how to size a system: sizing-guide-nvidia-rtx-virtual-workstation.pdf. Our basic advice, though, is to ask which GPU your users have today for these applications and then use that physical GPU as a proxy for the vGPU profile. The GPU choice will depend on your users’ needs, but vWS is supported on the A16, L4 and L40; the A16 would be considered an entry-level GPU.

Assuming your definitions of basic and higher graphics usage match ours, you would be looking at 100 vPC licenses and 50 vWS licenses.
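As a rough back-of-the-envelope illustration (the boards and profile sizes below are assumptions for the example, not a sizing recommendation), the density math looks something like this:

```python
# Rough capacity math for the scenario above -- a sketch, not a quote.
# Assumptions (adjust to your own boards/profiles):
#   * vPC tier: A16 boards (4 GPUs x 16 GB = 64 GB) carved into 2 GB "B" profiles
#   * vWS tier: L40 boards (48 GB) carved into 8 GB "Q" profiles
# Real deployments also have to respect per-GPU VM limits, host CPU/RAM,
# and the rule that a single physical GPU hosts only one profile size.
import math

def boards_needed(users: int, board_fb_gb: int, profile_fb_gb: int) -> int:
    vms_per_board = board_fb_gb // profile_fb_gb
    return math.ceil(users / vms_per_board)

vpc_boards = boards_needed(users=100, board_fb_gb=64, profile_fb_gb=2)  # A16, 2B
vws_boards = boards_needed(users=50,  board_fb_gb=48, profile_fb_gb=8)  # L40, 8Q

print(f"vPC tier: {vpc_boards} x A16 (32 desktops/board), 100 vPC CCU licenses")
print(f"vWS tier: {vws_boards} x L40 (6 workstations/board), 50 vWS CCU licenses")
```

With those assumptions it works out to roughly 4 A16 boards for the vPC tier and 9 L40 boards for the vWS tier; a partner can refine this against your real workloads.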

To get started I recommend that you work with an NVIDIA partner who is familiar with VMware, NVIDIA vGPU and the various OEM server vendors who offer NVIDIA-Certified Systems. You can locate a partner here: Find an NVIDIA Partner | NVIDIA (search via the competency “NVIDIA Virtual Desktops”).

A partner will be able to guide you through the various GPU options, density, server considerations, ESXi versions etc.

-D-

Hi DougT,

what I’m trying to wrap my head around is the following:
Example: we install two A40 cards in our hosts. What difference, other than the dollar amount, does it make whether we now buy x vPC licenses or x vWS licenses? I guess what I’m trying to ask is: are the cards software-throttled based on which licenses we activate?

Thanks!

For GPU sharing for compute tasks like HPC, AI, Deep Learning, the A100 and A30 can also be used but require a different licensing scheme called NV AI Enterprise.

Sorry to interject, but does this mean that the software required for sharing GPU compute resources remotely on the A100 is effectively unavailable to non-business customers?
For example, if I wanted to give some friends remote access to my A100 so they could experiment with some AI things, would I be unable to do so with a device I purchased for personal use?
Is it even possible for an individual to subscribe to AI Enterprise if they are not doing so through a business account, even if they were to pay whatever the price might be?
Does NVIDIA provide access to these features in any other manner when they otherwise require AI Enterprise, or are customers who purchase these devices without AI Enterprise stuck with an arbitrarily partial product?

Hi, I’m trying to understand: if we buy a couple of A30s to mount in a single server used to offer Kubernetes on bare metal, can these GPUs be used by multiple users concurrently, or does this really require NV AI Enterprise? From the discussion above, it seems that the license is only required if VMs are used.
Resellers don’t seem to have a clear answer to this simple question and give different responses…

The A30 on bare metal with MIG doesn’t require NV AI Enterprise licensing. It can of course be added on top for enterprise support and access to the NGC catalog (pretrained models, workflows, …).
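For what it’s worth, on bare metal each MIG instance shows up to the driver as its own device, so separate users or containers can each be handed a slice. Here is a minimal sketch with the nvidia-ml-py (pynvml) bindings, assuming MIG mode is already enabled and instances have been created (e.g. with nvidia-smi mig):

```python
# Sketch: enumerate MIG instances on a bare-metal A30 with nvidia-ml-py (pynvml).
# Assumes MIG mode is enabled and GPU/compute instances already exist.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        gpu = pynvml.nvmlDeviceGetHandleByIndex(i)
        current, _pending = pynvml.nvmlDeviceGetMigMode(gpu)
        if current != pynvml.NVML_DEVICE_MIG_ENABLE:
            continue  # skip GPUs that are not in MIG mode
        print(f"GPU {i}: {pynvml.nvmlDeviceGetName(gpu)}")
        for m in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
            try:
                mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, m)
            except pynvml.NVMLError:
                continue  # MIG slot not populated
            mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
            print(f"  MIG {m}: {pynvml.nvmlDeviceGetUUID(mig)} "
                  f"({mem.total // (1024 ** 2)} MiB)")
finally:
    pynvml.nvmlShutdown()
```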