M10 for Video on VMware running RDS 2016?


We’re running Microsoft Remote Desktop Services 2016 servers hosted on VMware 6 on HP DL380 hardware.

We’re having trouble with video playback performance (YouTube, Media Player, VLC) in our remote sessions.

Is NVIDIA GRID, specifically the M10 model, a practical way to improve video performance on Remote Desktop servers?

thank you!

best regards


Yes, the M10 will be fine for that sort of thing. The problem will be in your browser choice, your configuration, or your delivery protocol.

What protocol are you using to connect to the servers?



Hi Ben,

RDP 8 and 10 (from Windows 7, 8 and 10 clients) and FreeRDP (from HP thin clients).

What problems are you expecting?

best regards

Mainly poor protocol optimisation / performance.

What issues are you experiencing? Stuttery image / dropped frames? Image quality? etc …


Stuttering, bad frame rates, muffled audio … it’s better on a standalone 2012 server, but horrid on our 2016 farm.

It’s difficult to pin down to a specific issue, and underwhelming too, since I’m not trying to handle complicated 3D renderings, just acceptable video performance for a really small group of users.

Getting graphics cards is pretty much my last option …

Yes, sounds like a combination of protocol and OS.

With just a basic RDP connection, your tuning options are extremely limited. Most people use Citrix, VMware, DCV or others when virtualizing graphics, as they have a lot more tuning options to fix things like this.

Believe it or not, delivering 3D renderings and Dassault and Autodesk products is easier than streaming a video. Streaming a video requires a near-perfect experience at all times: dropped frames, image compression etc. are much more noticeable when you’re watching a video. When delivering the above-mentioned applications, you are much less likely to notice performance issues due to the nature of how the application is used.

There are a couple of Group Policies you should have set to make the RDSH use the GPU. Have you set those?

Have a look under here: Windows Components > Remote Desktop Services > Remote Desktop Session Host > Remote Session Environment

Absolutely. I have fiddled about with pretty much every GPO regarding RDS, and RemoteFX as well.

No success.

Interestingly enough, video performance on a standalone 2012 R2 server is better than on our 2016 RDS farm.

Anyway, I’ve seen that HPE is offering GPUs for servers now as well, so I might give that a try and hope for the best …

Have you tried connecting with a different protocol, rather than RDP?


Sorry for my late reply. I was able to resolve this, which makes me happy and proud.

And although I cannot pinpoint the exact GPO that was responsible for the bad video performance on said systems, I can now say that it was one in "Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host". Most likely the Microsoft Basic Render Driver setting or the "prefer H.264" setting.

Maybe this entry helps anybody else who ends up here after a Google search.

However, I saw that HPE is offering NVIDIA graphics cards now too, so if I can get my hands on a test unit, I’m curious whether we can further enhance video performance with GPUs.

Thanks for your input, BJones!

best regards

You could help me (and many others) a lot if you could post your GPO settings here. I’m having the same struggle over here and I am not able to figure out (yet) which GPO setting does the magic trick.

I’m having the same challenge…
I have an HP DL380 Gen9 with an NVIDIA M10 on VMware 6.5, using an MS RDP session host (Windows 2016).
When I connect from a Windows 10 RDP client and watch a 4K movie in Chrome, the CPU usage is at 80 to 90%. I’ve made some changes to the mentioned GPO settings, but unfortunately so far this makes no difference.

When I check the VMware host, I see this:

[root@vmware1:~] nvidia-smi vgpu -q -i 1
GPU 0000:87:00.0
    Active vGPUs              : 0

[root@vmware1:~] nvidia-smi vgpu
Tue Jul 11 08:03:48 2017
+-------------------------------+--------------------------------+-----------+
| NVIDIA-SMI 367.106              Driver Version: 367.106                    |
|-------------------------------+--------------------------------+-----------|
| GPU  Name                     | Bus-Id                         | GPU-Util  |
|      vGPU ID    Name          | VM ID     VM Name              | vGPU-Util |
|===============================+================================+===========|
|   0  Tesla M10                | 0000:86:00.0                   |    1%     |
|      84824      GRID M10-1A   | 84825     VDI1                 |    0%     |
+-------------------------------+--------------------------------+-----------+
|   1  Tesla M10                | 0000:87:00.0                   |    0%     |
+-------------------------------+--------------------------------+-----------+
|   2  Tesla M10                | 0000:88:00.0                   |    0%     |
+-------------------------------+--------------------------------+-----------+
|   3  Tesla M10                | 0000:89:00.0                   |    0%     |
+-------------------------------+--------------------------------+-----------+

It seems the vGPU is not being used at all??


[root@vmware1:~] nvidia-smi vgpu -q
GPU 0000:86:00.0
    Active vGPUs              : 1
    vGPU ID                   : 273044
    VM ID                     : 273045
    VM Name                   : VDI1
    vGPU Name                 : GRID M10-1Q
    vGPU Type                 : 39
    vGPU UUID                 : 795e3763-3fdf-c61e-e0f0-964c1490d1bf
    Guest Driver Version      : 370.12
    License Status            : Unlicensed
    Frame Rate Limit          : 60 FPS
    FB Memory Usage           :
        Total                 : 1024 MiB
        Used                  : 1024 MiB
        Free                  : 0 MiB
    Utilization               :
        Gpu                   : 5 %
        Memory                : 0 %
        Encoder               : 0 %
        Decoder               : 0 %

GPU 0000:87:00.0
    Active vGPUs              : 0

GPU 0000:88:00.0
    Active vGPUs              : 0

GPU 0000:89:00.0
    Active vGPUs              : 0
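A side note on the confusion above: per the `-q` output, only GPU 0000:86:00.0 hosts an active vGPU, so querying index 1 (0000:87:00.0) with `-i 1` will always report 0 active vGPUs even when the card is working. As an illustration (this is a hypothetical helper, not an NVIDIA tool), a small parser for that key/value output makes the per-GPU picture easy to see:

```python
# Illustrative sketch: parse `nvidia-smi vgpu -q` style output to see
# which physical GPUs actually host vGPUs. The field names match the
# output pasted above; this is not an official NVIDIA parser.

def parse_vgpu_query(text):
    """Return {bus_id: {"active": int, "vgpus": [per-vGPU dicts]}}."""
    gpus = {}
    gpu = vgpu = None
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("GPU "):
            # New physical GPU section, keyed by its bus id
            gpu = gpus.setdefault(line.split(None, 1)[1],
                                  {"active": 0, "vgpus": []})
            vgpu = None
        elif ":" in line and gpu is not None:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key == "Active vGPUs":
                gpu["active"] = int(value)
            elif key == "vGPU ID":
                # Start of a per-vGPU attribute block
                vgpu = {"vGPU ID": value}
                gpu["vgpus"].append(vgpu)
            elif vgpu is not None:
                vgpu[key] = value
    return gpus

# Trimmed version of the output shown above
sample = """\
GPU 0000:86:00.0
    Active vGPUs : 1
    vGPU ID : 273044
    VM ID : 273045
    VM Name : VDI1
    vGPU Name : GRID M10-1Q

GPU 0000:87:00.0
    Active vGPUs : 0
"""

info = parse_vgpu_query(sample)
print({bus: g["active"] for bus, g in info.items()})
```

Running this against the trimmed sample shows one active vGPU on 0000:86:00.0 (VM "VDI1") and none on 0000:87:00.0, which is exactly why `-i 1` looked idle.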

RDPGW + Broker = Windows 2012 R2
RDP Session host (VDI1) is Windows 2016

Yes, the drivers are installed, the license applied etc.
So the main question is: How can I get the vGPU to work in the RDP session?

regards Pascal

My settings won’t help you. I mostly use Citrix or other delivery protocols, but never native RDP.

The only GPO relating to Graphics I have set is:

"Windows Components/Remote Desktop Services/Remote Desktop Session Host/Remote Session Environment > Use the hardware default graphics adapter for all Remote Desktop Services sessions"

Which has already been mentioned and you’ve already tried. If you have the same setup, same issue and are trying to achieve the same thing as ibishotline, then the resolution should be the same. In fact, the only difference I can see between you guys is your Hypervisor version. You both have identical setups other than that.

Have you got GPU acceleration enabled in Chrome?
Have you tried a different browser?
What do normal applications perform like?
How many monitors are you using and what resolution are they?
What’s your endpoint device specification?



Hi Beaker,

BJones is right. The GPO mentioned above is the only thing you need to set to get GPU usage in RDS sessions.
I’m also running pure Server 2016 RDSH with an M10 without issues.
BTW: I would recommend assigning an M10-8A or Passthrough profile to an RDSH VM…

Here’s a screenshot: https://picload.org/view/rpirglda/server2016rdsh.jpg.html



Hey guys,

Thanks for all the input. I’ve been working on it since, and it seems to work now!

First an answer on the questions:

  • GPU acceleration is enabled in Chrome
  • Internet Explorer is not a web browser :-) so I skip that question
  • I use 1 monitor at 1920 x 1080
  • The devices we are using for RDP are HP 6000 machines with 4 GB RAM, PXE-booting into Thinstation
  • During the test I started the RDP session from a Windows 10 desktop.

In the GPO I’ve turned on:
Use the hardware default graphics adapter for all Remote Desktop Services sessions: Enabled
Prioritize H.264/AVC 444 graphics mode for Remote Desktop Connections: Enabled
Configure H.264/AVC hardware encoding for Remote Desktop Connections: Enabled
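For reference, once applied, these three policies should show up as REG_DWORD values under HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services. The value names below are my assumption based on the commonly documented GPO-to-registry mappings, so double-check them with gpresult on your own servers. A minimal sketch for auditing values you have read or exported from that key:

```python
# Hedged sketch: the assumed registry values (all REG_DWORD = 1) behind the
# three RDS graphics GPOs, under
#   HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services
# Value names are the commonly documented equivalents; verify with gpresult.

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"

EXPECTED = {
    "Use the hardware default graphics adapter for all RDS sessions":
        ("bEnumerateHWBeforeSW", 1),
    "Prioritize H.264/AVC 444 graphics mode for Remote Desktop Connections":
        ("AVC444ModePreferred", 1),
    "Configure H.264/AVC hardware encoding for Remote Desktop Connections":
        ("AVCHardwareEncodePreferred", 1),
}

def missing_policies(current_values):
    """Given {value_name: dword} read from the registry, return the
    friendly names of policies that are absent or not enabled."""
    return [name for name, (value, data) in EXPECTED.items()
            if current_values.get(value) != data]

# Example: only the hardware-adapter policy applied so far,
# so the two H.264 policies come back as still to do.
print(missing_policies({"bEnumerateHWBeforeSW": 1}))
```

On the server itself you would populate `current_values` by reading `POLICY_KEY` with winreg (Windows only); the audit logic above is platform-independent.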

When I log in to the RDP session as a user, start up Chrome and watch a 4K movie in theater mode on YouTube, I see vGPU utilization!

On the VMware host I see:

Thu Jul 13 11:22:18 2017
+-------------------------------+--------------------------------+-----------+
| NVIDIA-SMI 367.106              Driver Version: 367.106                    |
|-------------------------------+--------------------------------+-----------|
| GPU  Name                     | Bus-Id                         | GPU-Util  |
|      vGPU ID    Name          | VM ID     VM Name              | vGPU-Util |
|===============================+================================+===========|
|   0  Tesla M10                | 0000:86:00.0                   |   10%     |
|      273044     GRID M10-1Q   | 273045    VDI1                 |    9%     |
+-------------------------------+--------------------------------+-----------+
|   1  Tesla M10                | 0000:87:00.0                   |    0%     |
+-------------------------------+--------------------------------+-----------+
|   2  Tesla M10                | 0000:88:00.0                   |    0%     |
+-------------------------------+--------------------------------+-----------+
|   3  Tesla M10                | 0000:89:00.0                   |    0%     |
+-------------------------------+--------------------------------+-----------+

On the RDP User session I see:

C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi.exe
Thu Jul 13 13:23:15 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 370.12                  Driver Version: 370.12                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        TCC/WDDM     | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID M10-1Q       WDDM   | 0000:02:02.0      On |                  N/A |
| N/A   N/A    P0    N/A /  N/A |   279MiB /  1024MiB  |      9%   Prohibited |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0       144   C+G  Insufficient Permissions                   N/A        |
|    0      1084   C+G  …ost_cw5n1h2txyewy\ShellExperienceHost.exe N/A        |
|    0      2292   C+G  Insufficient Permissions                   N/A        |
|    0      3108   C+G  Insufficient Permissions                   N/A        |
|    0      4640   C+G  Insufficient Permissions                   N/A        |
|    0      5336   C+G  …indows.Cortana_cw5n1h2txyewy\SearchUI.exe N/A        |
|    0      6548   C+G  …x86)\Google\Chrome\Application\chrome.exe N/A        |
|    0      6728   C+G  C:\Windows\explorer.exe                    N/A        |
+-----------------------------------------------------------------------------+

On both sides, the vGPU is doing its job.

I thought there would be more vGPU utilization when using Chrome, but Chrome still uses CPU power as well.

A better way to test the vGPU is with this website: http://madebyevan.com/webgl-water/
Then you see about 20% vGPU utilisation.

You recommend an M10-8A profile. I found this website: http://edwinhouben.com/blog/2017/01/25/why-should-we-deploy-gpus-in-hosted-desktop-deployments-part-3-architecture-profiles-guests/

It shows all the profiles for the M10. Unfortunately it does not explain what the main difference between all the profiles is… so it is a matter of testing.

All in all: the M10 card works in RDSH. It still needs some fine-tuning; tips for this remain more than welcome!



Glad it’s working. However YouTube as a test is not a very good way of assessing overall performance or capabilities, unless your users actually use it for part of their work.

Quite a few people seem to think that watching a YouTube video is a “basic” test, when actually (as I’ve said in another thread on here) I personally think it’s one of the most challenging things you can deliver remotely, as any imperfection in visual quality, frame rate or audio quality is immediately obvious. However, with a normal application like MS Word / MS Excel etc or even a heavy 3D CAD application, subtle changes in the overall experience are much less obvious to the end user, so these are typically a much better test to use unless watching YouTube is part of their work.

There are a few reasons I think Simon suggested using the 8A profile or Passthrough over the 1A (and he’s absolutely right to recommend it). Firstly, with an RDS / Terminal Server you’ll have multiple users sharing the same Frame Buffer allocation, so you want a lot of it so that it doesn’t get used up by a single user. Secondly, by assigning the full Frame Buffer allocation of one of the M10’s GPUs, you grant that VM sole access to that GPU, and therefore only the users on that RDS VM touch that specific GPU. Whereas if you allocate smaller A profiles (1, 2, or 4 GB) to multiple RDS VMs, you can have multiple users from different VMs accessing the same GPU and potentially hit contention issues when the RDS VMs are heavily loaded.

The same applies with Passthrough (which provides the full Frame Buffer and a dedicated GPU); however, be sure you know which applications you are going to provide, as Passthrough requires a different license.
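To make the frame-buffer arithmetic above concrete: an M10 board carries four physical GPUs with 8 GB of frame buffer each (32 GB per board), and the profile size determines how many vGPUs, and therefore how many RDSH VMs, share one physical GPU. A back-of-the-envelope sketch (pure arithmetic, no NVIDIA tooling assumed):

```python
# Back-of-the-envelope: how many vGPUs of a given profile size fit on an M10.
# The M10 has 4 physical GPUs with 8 GB of frame buffer each (32 GB total).

GPUS_PER_BOARD = 4
FB_PER_GPU_GB = 8

def vgpus_per_gpu(profile_gb):
    """Max vGPUs of a given frame-buffer size on one physical GPU."""
    return FB_PER_GPU_GB // profile_gb

def vgpus_per_board(profile_gb):
    """Max vGPUs of that size across the whole M10 board."""
    return GPUS_PER_BOARD * vgpus_per_gpu(profile_gb)

# M10-1A: 8 vGPUs per physical GPU, so up to 8 RDSH VMs contend for one GPU.
# M10-8A: 1 vGPU per physical GPU, so that RDSH VM has the GPU to itself.
print(vgpus_per_gpu(1), vgpus_per_gpu(8))
```

This is the whole argument for the 8A profile in one line: at 8 GB the divisor hits the full frame buffer of a physical GPU, so contention between VMs on that GPU disappears.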