I’ve installed the XD 7.15 VDA on a Windows 10 1709 VM. The VM has a GRID P6 GPU assigned to it (P6-2Q profile), the 6.1 GRID driver set installed, and as part of the 7.15 VDA install I chose the HDX 3D Pro option. I have applied the Very High Definition User Experience policy template to the VM, which I access as a single virtual machine in its own Delivery Group.

When I connect to this VM using two 4K panels and the Linux-based Receiver (13.9.1.6), I can see the H.264 encoding is being performed at the host end (using ‘nvidia-smi encodersessions’, and via the Remote Display Analyzer tool - ‘Hardware Encode = Enabled’). Visual quality is ‘High’ and the Max Frames p/s value is 60. Video performance via VLC or MPC is good, with 4K playing at around 25 fps (although 4K YouTube is poor - I guess that’s down to the encoding being done by a different protocol?). Thinwire is the display mode, with video codec usage enabled for the entire screen. Transport protocol is TCP.
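For anyone who wants to reproduce the host-side check, here’s a minimal sketch that polls NVENC utilization alongside ‘nvidia-smi encodersessions’. It assumes nvidia-smi is on the PATH of the VDA host; the query fields used are standard nvidia-smi ones, but verify them against your GRID driver version:

```python
# Sketch: confirm HDX session frames are being encoded by the GPU (NVENC)
# on the VDA host. Run this while a session is connected.
import subprocess
import time

def encoder_active(threshold_pct: int = 1) -> bool:
    """Return True if the GPU's NVENC engine shows any utilization."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.encoder",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    # One line per GPU; any encoder load above the threshold counts.
    return any(int(line) >= threshold_pct for line in out.splitlines())

if __name__ == "__main__":
    for _ in range(10):
        print("NVENC in use" if encoder_active() else "NVENC idle")
        time.sleep(1)
```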
Mouse performance is poor - a perennial issue for me with Citrix on 4K panels, with the pointer lagging behind the actual cursor position. Overall performance, however, is good. Then I turn on the lossless switch, and mouse performance is great - really fluid and smooth. RDAnalyzer reports ‘Video codec not in use’ and the H.264 stream no longer appears on the host-end GPU. RDAnalyzer also shows Adaptive Display as ‘True’. Of course, video performance is now poor for both local video and YouTube, etc.
So does anyone know why I can’t get the video performance of the GPU-enabled mode along with the general feel and mouse performance of the non-GPU mode? Is this a protocol limitation I’m unaware of?
Well, my experience is different, so I think there is something wrong in your Citrix policy config.
With H.264 it should be smooth, but image quality is worse compared to lossless (as expected). I don’t see any reason why mouse performance should be different between these modes, as long as you don’t use applications that render the cursor server-side. If you do, Thinwire should always be worse, because we reduce latency with NVENC, so H.264 is the better option for mouse performance.
Thanks Simon,
It’s puzzling to me, because I have tried many policy settings with varying results, yet the mouse issue remains. I’ve also compared VMware Blast, and with H.264 encoding there I see performant mouse operation, so I know it can be done over an H.264-encoded session.
Would you be able to share a policy config with me, so I can compare and at least work from a known-good policy baseline?
Neal
You can test with ‘Use video codec for compression’ set to ‘For the entire screen’, ‘Visual quality’ = High, and a target frame rate of 60 fps.
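To make comparing against that baseline easier, here’s a small illustrative sketch that encodes the suggested settings as data and diffs them against what you transcribe from Remote Display Analyzer. The policy names match the Citrix Studio display names mentioned in this thread; the ‘observed’ values are placeholders you would fill in yourself:

```python
# Illustrative only: suggested policy baseline as data, diffed against
# values transcribed by hand from Remote Display Analyzer.
BASELINE = {
    "Use video codec for compression": "For the entire screen",
    "Visual quality": "High",
    "Target frame rate": "60",
    "Use hardware encoding for video codec": "Enabled",
}

def diff_against_baseline(observed: dict) -> None:
    """Print any setting that deviates from the suggested baseline."""
    for policy, expected in BASELINE.items():
        actual = observed.get(policy, "<not set>")
        flag = "OK " if actual == expected else "DIFF"
        print(f"[{flag}] {policy}: expected {expected!r}, got {actual!r}")

# Placeholder values - replace with what RDAnalyzer reports in-session.
diff_against_baseline({
    "Use video codec for compression": "For the entire screen",
    "Visual quality": "High",
    "Target frame rate": "60",
})
```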
Did you also test with a single screen, or Full HD instead of 4K? I assume the issue is more related to the Linux endpoint driving 2 x 4K.
That’s a good point. The NUC endpoints run Core i3 and i5 CPUs with an integrated HD 620 GPU - not hugely powerful, but enough to push 2 x 4K. Yes, with lower resolutions performance is better. However, and this is the key difference for me: running the same workload on the same VM with the same client shows a large increase in client CPU usage when using Citrix instead of VMware Blast (roughly 30% vs 90%). I’ve raised this point elsewhere with no firm reasoning as to why, and I feel that, given the client-side decoding is H.264 in both cases, the CPU usage should be similar. Or is it the case that HDX’s client-side decode is simply less efficient than Blast’s implementation?
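For what it’s worth, here’s a rough sketch of how I’d make that client-CPU comparison repeatable on the endpoint. The process names are assumptions for illustration - confirm the real client process name on your machine (e.g. with ‘ps aux | grep -i ica’) - and psutil is a third-party package:

```python
import time
import psutil  # third-party: pip install psutil

# Assumed client process names: 'wfica' for Citrix Receiver on Linux,
# 'vmware-view' for the Horizon client. Verify on your endpoint.
CLIENT_NAMES = {"wfica", "vmware-view"}

def sample_client_cpu(seconds: int = 30) -> None:
    """Print the remoting client's CPU usage once per second."""
    procs = [p for p in psutil.process_iter(["name"])
             if (p.info["name"] or "").lower() in CLIENT_NAMES]
    for p in procs:
        p.cpu_percent(None)  # prime the per-process counter
    for _ in range(seconds):
        time.sleep(1)
        for p in procs:
            try:
                print(f"{p.info['name']}: {p.cpu_percent(None):5.1f}% CPU")
            except psutil.NoSuchProcess:
                pass  # client exited mid-run

if __name__ == "__main__":
    sample_client_cpu()
```

Running this while playing the same 4K clip over HDX and then over Blast would at least put hard numbers behind the 30% vs 90% observation.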