vSphere 6.7, RDSH, GRID

Hi everyone,

I am sure this question comes up a lot, but I’m going to ask it again as there’s so much conflicting information out there.

We have a small cluster of HP DL380 Gen10 servers running vSphere/vCenter 6.7. These hosts run various RDSH workloads: session-based RDS, not VDI, where a single RDS server has multiple users logged in and working.

Is it possible/supported to purchase an M10 plus the applicable licensing and make it available inside select RDSH virtual machines? I’m looking for a definitive answer as to whether this works and, if so, what the gotchas are and what licensing is required for this use.

Thanks.

Hi

Yes, this is fine. Any VM with a GPU attached needs an NVIDIA license. vGPU is licensed per concurrent user (CCU), and for RDSH you’ll need vApps (Virtual Applications) licensing.

The best way to set that up is to use one physical GPU per RDSH VM. The M10 is a quad-GPU board with 8GB per GPU, so with a single M10 you’d have 4 VMs, each with an 8GB GPU using the 8A vGPU profile.
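
If you script the VM build rather than use the vSphere client, here’s a minimal pyVmomi sketch of attaching that profile. The vCenter address, credentials and VM name are placeholders, and "grid_m10-8a" is an assumption based on ESXi’s usual naming convention for the M10 8A profile, so check the profile list on your host:

```python
# Minimal sketch: attach an NVIDIA vGPU profile to a powered-off VM via pyVmomi.
# vCenter address, credentials and VM name are placeholders; "grid_m10-8a"
# follows ESXi's usual naming convention for the M10 8A profile.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "RDSH-01")  # placeholder VM name
    view.Destroy()

    # A vGPU is presented to the VM as a shared PCI device with a vmiop backing.
    backing = vim.vm.device.VirtualPCIPassthrough.VmiopBackingInfo(vgpu="grid_m10-8a")
    device = vim.vm.device.VirtualPCIPassthrough(backing=backing)
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=device)

    # vGPU VMs require all guest memory to be reserved.
    spec = vim.vm.ConfigSpec(deviceChange=[change], memoryReservationLockedToMax=True)
    vm.ReconfigVM_Task(spec=spec)  # wait for the task to finish, then power on the VM
finally:
    Disconnect(si)
```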

With RDSH, don’t forget to enable the GPU with the GPO under Remote Session Environment: "Use the hardware default graphics adapter for all Remote Desktop Services sessions".
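
If you’d rather set that per server than via a domain GPO (a domain GPO is the cleaner option), the policy maps to a registry value; a minimal sketch, run elevated on the RDSH server:

```python
# Minimal sketch: set the registry value behind the GPO "Use the hardware
# default graphics adapter for all Remote Desktop Services sessions".
import winreg

key_path = r"SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                        winreg.KEY_SET_VALUE) as key:
    # 1 = enumerate the hardware GPU before the software renderer
    winreg.SetValueEx(key, "bEnumerateHWBeforeSW", 0, winreg.REG_DWORD, 1)
```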

Regards

Ben

Hi Ben,

Thank you for responding, it is much appreciated.

I read in the packaging and licensing guide that vApps only supports 1280×1024 as a resolution. We have some users with 4K RDS sessions; does that mean we need to look at vPC licenses instead?

Thanks.

Hi

The resolution in that document is a bit misleading. You’ll be fine with 4K and vApps / RDSH.

How many users are running 4K, and do they have multiple monitors? (Dual 4K).

Regards

Ben

Hi Ben,

On one RDSH server there are about ten single-screen 4K users and another ten users with dual 1080p screens. They’re not heavy graphics users and are on the VMware software driver with no GPU at the moment… There are just a few 3D apps that really struggle without a GPU and so end up having to be run outside the RDSH on a desktop with a basic/integrated graphics card.

The other servers run basic Chrome or other desktop-style apps. There’s no huge demand there either, which is why I was only considering an M10.

The other option is passing the graphics cards through directly to the RDSH servers that require them (vDGA)… that won’t require any licensing from what I understand, but offers far less density at one graphics card per server. I’ve had a look at the VMware HCL and it only lists ESXi 6.5 U2 and lower for a DL380 Gen10 with any of the NVIDIA graphics cards, whereas NVIDIA’s vGPU pages say the M10 and others are supported on ESXi 6.7 but don’t mention anything about the underlying server (VMware vSphere :: NVIDIA Virtual GPU Software Documentation). HPE does, however, say that the M10 is supported in the DL380 servers with their own product code (Q0J62A). It’s hard to verify what is actually compatible. Any thoughts?

Thanks.

Hi

Thanks for the info …

Using Passthrough will still require a license per CCU. To be blunt, there is no way around the NVIDIA licensing requirements (avoiding them being the typical reason people try Passthrough). The vGPU software has come a long way since the earlier releases, and with each new release Passthrough becomes more of a hindrance (due to missing functionality) than a benefit, so vGPU should typically be the preference. Your density will be the same with either, and there is little to no performance difference between them. Basically, don’t bother :-)

For the best features, performance and support, run the latest of everything: ESXi 6.7 U2, M10 GPUs with vGPU software 8.0 (or newer if it’s available when you deploy) and Server 2019 RDSH.

If you want to add some future-proofing and extra flexibility and functionality to your environment, replace the M10s with multiple T4 GPUs. For reference, 2x T4 (16GB each) will replace 1x M10 (32GB across its four GPUs).

With the M10, you’ll use a single GPU from the board in each RDSH VM, giving 8GB of framebuffer per VM regardless of vGPU or Passthrough. With T4s, you’d instead split each card into 8GB profiles, change the default scheduler to "Fixed" (see the sketch below) and run 2x RDSH VMs on each card, allowing the same VM density but with the additional feature benefits.
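
On the scheduler change: it’s a host-level nvidia module parameter rather than a per-VM setting. Here’s a sketch wrapping the esxcli command from NVIDIA’s vGPU documentation, runnable with the Python interpreter that ships in the ESXi shell (or just run the esxcli command directly over SSH):

```python
# Minimal sketch: switch the NVIDIA vGPU scheduler to Fixed Share on an ESXi host.
# Wraps the esxcli module-parameter command from NVIDIA's vGPU documentation;
# the host needs a reboot afterwards for the change to take effect.
import subprocess

# RmPVMRL values per the vGPU docs: 0x00 = Best Effort (the default),
# 0x01 = Equal Share, 0x11 = Fixed Share (with the default time slice).
subprocess.run(
    ["esxcli", "system", "module", "parameters", "set",
     "-m", "nvidia", "-p", "NVreg_RegistryDwords=RmPVMRL=0x11"],
    check=True,
)
print("vGPU scheduler set to Fixed Share; reboot the host to apply.")
```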

As for OEM-sourced GPUs, in my experience there is no difference between them and GPUs sourced elsewhere that will affect whether they work. Some OEMs claim their GPUs carry “special firmware” (or something similar) that allows them to work in their servers, in an attempt to justify the price hike they add on. But I’ve found that whoever supplies the GPU, it works just fine, so don’t feel you have to purchase the GPU from your server OEM. If you do, make sure they discount their server hardware enough to offset the GPU price hike :-)

As you have a few 4K users, make sure you keep an eye on the Framebuffer and Encoder utilisation on each of the GPUs, and test sufficiently before moving to a production deployment to ensure you have enough resources.
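
If you want to script that monitoring rather than watch nvidia-smi, here’s a small sketch using the NVML Python bindings (pip install nvidia-ml-py), run on a host where the NVIDIA driver is loaded; the output format is just illustrative:

```python
# Minimal sketch: report framebuffer and encoder utilisation per physical GPU
# via NVML; requires the NVIDIA driver and the pynvml bindings.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)  # framebuffer usage
        enc_util, _period_us = pynvml.nvmlDeviceGetEncoderUtilization(handle)
        print(f"GPU {i} ({name}): FB {mem.used / 2**20:.0f}/"
              f"{mem.total / 2**20:.0f} MiB, encoder {enc_util}%")
finally:
    pynvml.nvmlShutdown()
```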

Regards

Ben

Hi Ben,

That is an outstanding response, thanks for taking the time. It sounds like vGPU is the way forward, so we’ll start looking into a proof of concept soon.

Thanks again.

Hi

No worries, glad the above was useful.

Get back to us if you have any other questions, otherwise, best of luck!

Regards

Ben