I need to do research on virtual GPUs, and I need to run it on my PC so I can experiment, take screenshots, and gather results.
So I got a free 90-day NVIDIA vGPU license to do this, but from what I've read, it seems to need servers and the like. I'm weak in this area. I've installed VMware Workstation and VirtualBox on my PC before, but never servers and so on.
So I'm quite confused.
I'm still a beginner with servers and virtualization, and I need to use the NVIDIA Virtual GPU license on my desktop (Windows 10), which in turn needs VMware, but I don't know how to set it up on my machine.
Click on the "Get Started" tab and complete the required details. You'll then receive an email and will be able to access a GPU-accelerated VM that's cloud hosted.
For the evaluation, you don't need any licenses or VMware Workstation. Simply follow the instructions once you've completed the "Get Started" form.
For the free Test Drive, you don't need to download anything.
You will be given access to a virtual machine (a server running somewhere else that you connect to from your PC) which has a GPU.
Go to NVIDIA's free trial page and get started. Everything is explained there.
So we have a ~200-user office that operates on Microsoft VDI (virtual desktop infrastructure) using Hyper-V with Remote Desktop brokering and virtualization hosts.
Relevant to this query, we have 3x virtualization hosts supporting the ~200 VMs for VDI use. They do NOT have a GPU in them.
Increasingly, users are running more and more applications (and even browsers with video/interactive content) that lag due to the lack of GPU acceleration. The work is offloaded to the CPUs, which is causing an upward trend in CPU usage and more and more spikes that affect the whole userbase.
I've been tasked with looking at GPU acceleration for the VDI environment. Which is fine; there is quite a range of Tesla GPUs I could use in my environment, with varying specs etc.
However… Management are insisting that the cost of the Tesla GPUs is too high, and that it's not feasible to pass that cost increase on to the client. I argue that that's not really my problem, lol. But management insist that it MUST be possible to use consumer-grade GPUs in the servers to provide acceleration.
I told them this is not possible: to use the NVIDIA vGPU management software, you need a Tesla-class GPU.
We are at an impasse, and I'm stuck scratching my head, because they still believe we can use consumer cards.
So, please clarify for me: can you, or can you not, use consumer-grade GPUs (read as RTX 2080s, RTX 2070s, etc.) in a server and then have the NVIDIA vGPU manager successfully segment those GPUs into devices mapped to VDI VMs?
The quick answer for your management team is no, it's not possible to use GeForce GPUs with vGPU. They are not designed or supported to work together.
Certain Quadro and Tesla GPUs are designed for Professional and Enterprise use with vGPU. However, as you are using Hyper-V, unfortunately you can’t use vGPU at all (Hyper-V doesn’t support it), so you would have to use DDA (Passthrough) with the GPUs instead. This is ok for RDSH deployments, but for VDI, a different combination of technologies would be required.
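For context on what DDA involves: assignment is done with PowerShell on the Hyper-V host, and it maps the entire physical device to a single VM. A minimal sketch, assuming a placeholder PCI location path and a VM named "VDI-01" (both of which you would substitute with your own values):

```powershell
# Find the GPU's PCI location path (look under the Display class)
Get-PnpDevice -Class Display | Select-Object FriendlyName, InstanceId

# Placeholder: substitute the location path reported for your GPU
$loc = "PCIROOT(0)#PCI(0300)#PCI(0000)"

# Disable the device in Device Manager first, then dismount it from the host
Dismount-VMHostAssignableDevice -LocationPath $loc -Force

# Configure MMIO space for the VM (values depend on the GPU's BAR sizes)
Set-VM -VMName "VDI-01" -GuestControlledCacheTypes $true `
       -LowMemoryMappedIoSpace 1GB -HighMemoryMappedIoSpace 16GB

# Assign the whole GPU to the VM -- DDA is one GPU per VM, no slicing
Add-VMAssignableDevice -LocationPath $loc -VMName "VDI-01"
```

Because DDA hands the whole card to one VM, it suits RDSH (many sessions sharing one large VM) far better than per-user VDI, where you'd need one GPU per desktop VM.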
GeForce GPUs don't support vGPU. But datacenter GPUs won't support it either on Hyper-V. You would need to run ESXi, Nutanix AHV, or KVM, or use Azure Stack HCI, which is currently the only "on-prem" option from Microsoft that supports GPU-P, the Microsoft implementation for slicing GPUs.
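For comparison with DDA, GPU-P partition assignment on an Azure Stack HCI host also uses PowerShell; a rough sketch (the partition count and VM name are placeholders, and valid counts depend on the specific GPU):

```powershell
# List GPUs on this host that support partitioning
Get-VMHostPartitionableGpu

# Configure how many partitions the GPU exposes
Set-VMHostPartitionableGpu -Name $gpuName -PartitionCount 4

# Attach one partition to a VM (the VM must be powered off)
Add-VMGpuPartitionAdapter -VMName "VDI-01"
```

Unlike DDA, each VM gets a slice of the GPU rather than the whole card, which is what makes GPU-P the closer analogue to NVIDIA vGPU for VDI-style density.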