So we have a ~200-user office that runs Microsoft VDI (virtual desktop infrastructure) on Hyper-V, with Remote Desktop Connection Broker and Virtualization Host roles.
Relevant to this query, we have 3x virtualization hosts supporting the ~200 VMs for VDI use. They do NOT have a GPU in them.
Increasingly, users are running more and more applications (and even browsers with videos/interactive content) that are lagging due to the lack of GPU acceleration. The rendering work is offloaded to the CPUs, which is driving an upward trend in CPU usage and more and more spikes that affect the whole userbase.
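For context on how I've been trying to quantify that trend, here's a minimal sketch of the kind of sampling I've been doing on the hosts. It assumes Python 3 with the psutil package is available, and the 90% threshold and 5-second interval are just my own arbitrary choices, nothing official.

```python
# Minimal CPU-spike sampler (sketch). Assumes Python 3 + psutil on the host;
# the threshold and interval below are arbitrary assumptions.
import csv
from datetime import datetime

import psutil

SPIKE_THRESHOLD = 90.0   # percent total CPU counted as a "spike" (assumption)
SAMPLE_INTERVAL = 5      # seconds between samples (assumption)

with open("cpu_samples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_percent", "spike"])
    while True:
        # Blocks for SAMPLE_INTERVAL seconds and returns average utilisation
        # across all logical cores over that window.
        pct = psutil.cpu_percent(interval=SAMPLE_INTERVAL)
        writer.writerow([datetime.now().isoformat(), pct, pct >= SPIKE_THRESHOLD])
        f.flush()
```

The resulting CSV is what I've been using to show the spikes lining up with the times users complain.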
I’ve been tasked with looking at GPU acceleration for the VDI environment. Which is fine; there is quite the range of Tesla GPUs with varying specs that I could use in my environment.
However… management is insisting that the cost of the Tesla GPUs is too high, and that it’s not feasible to pass that cost increase on to the client. I argue that that’s not really my problem, lol. But management insists that it MUST be possible to use consumer-grade GPUs in the servers to provide acceleration.
I told them this is not possible: to use the NVIDIA vGPU manager software, you need a Tesla-class GPU.
We are at an impasse, and I’m stuck scratching my head, because they still believe we can use consumer cards.
So, please clarify for me: can you, or can you not, use consumer-grade GPUs (read: RTX 2080s, RTX 2070s, etc.) in a server and have the NVIDIA vGPU manager successfully segment those GPUs into virtual devices mapped to VDI VMs?
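For what it's worth, here's the rough sanity check I'd run on a host to see what card the driver actually reports before even getting to the vGPU manager question. It assumes Python 3 and that nvidia-smi is on the PATH; the SUPPORTED_BOARDS set is just an illustrative subset I typed in, not NVIDIA's official support matrix.

```python
# Rough check of the installed GPU model against a (hypothetical, incomplete)
# list of vGPU-capable boards. Assumes nvidia-smi is installed and on PATH.
import subprocess

# Illustrative subset only -- the authoritative source is NVIDIA's vGPU
# supported-hardware documentation, not this script.
SUPPORTED_BOARDS = {"Tesla P4", "Tesla P40", "Tesla T4", "Tesla V100"}

def installed_gpu_names():
    """Return the GPU model names reported by the NVIDIA driver."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]

for name in installed_gpu_names():
    status = "possibly vGPU-capable" if name in SUPPORTED_BOARDS else "not on my list"
    print(f"{name}: {status}")
```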
Thanks.