XenApp Performance

Current Environment:
Dell R730, 2 x E5-2695 v3 2.3 GHz (14 cores each), 768 GB memory, 2 x 10 GbE NICs
XenServer 7.0, Windows 2012 R2 with XenApp 7.13, PVS 7.13
2 x NVIDIA Tesla M10 with GPU pass-through and GRID 5.1
Compellent SAN with SSD for the XenApp servers

We are currently experimenting with the Tesla M10s. We run a mix of endpoints (mostly old Dell PCs), some of which have dedicated GPUs and some of which don't. In recent months our end-user experience has been degrading. After logging a call with Citrix, they believe this started when we installed Receiver 4.4, which enables hardware acceleration by default. Hardware acceleration has since been disabled, but we are still experiencing problems.

We are an architectural firm and primarily run a mix of Autodesk applications, i.e. AutoCAD and Revit. Most of our user gripes are about AutoCAD performance, but we also see issues with Revit.

Running the Fishbowl test in Chrome we can see that it is offloading to the GPU correctly, but it also appears to be using the endpoint CPU. All Citrix policies for the server, user, and endpoint have been configured (as best we can tell) to prevent offloading to the local client.

Most of the literature we are reading recommends GRID with XenDesktop for the best experience, but unfortunately that isn't currently an option for us, so we are looking for advice and guidance on how best to overcome this in a XenApp environment. Are we unrealistic in trying to prevent any kind of endpoint offloading?


Without knowing which codec and Citrix policies you're running, it is pretty hard to help. But if you run H.264, for example, and you disable hardware decoding on the endpoint, that is exactly the wrong way to reduce endpoint CPU load: the decode then falls back to software on the client CPU.



The only endpoint policy we currently set is hardware acceleration: disabled.

Below are the Citrix policies we explicitly set; all others are left at their default behaviour.

Client floppy drives - Prohibited
Client network drives - Prohibited
Target minimum frame rate - 20 fps
Client TWAIN device redirection - Prohibited
Target frame rate - 30 fps
Visual quality - Always Lossless
Extra color compression - Disabled
Framehawk display channel - Disabled
Use hardware encoding for video codec - Enabled
Session idle timer - Enabled
Session idle timer interval - 360 minutes
Disconnected session timer - Enabled
Disconnected session timer interval - 360 minutes
Audio quality - Medium - optimized for speech
Client microphone redirection - Allowed
Audio Plug N Play - Allowed
Flash acceleration - Disabled
Flash default behavior - Disable Flash acceleration
Audio over UDP real-time transport - Enabled
Use GPU for optimizing Windows Media multimedia redirection over WAN - Allowed
Client printer names - Standard printer names
Printer properties retention - Retained in user profile only
Direct connections to print servers - Enabled
Single Sign-On - Disabled
Desktop wallpaper - Prohibited
Menu animation - Prohibited
View window contents while dragging - Prohibited
Client audio redirection - Allowed
Client printer redirection - Prohibited


Display Memory Limit - 131072 KB
Maximum allowed color depth - 16 bits per pixel
Queuing and Tossing - Enabled
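As a sanity check on the Display Memory Limit above, the framebuffer a session roughly needs is width x height x bytes per pixel, per monitor. A quick sketch of that arithmetic (the function name and the 1920x1080 example are illustrative, not from the thread):

```python
def display_memory_kb(width: int, height: int, bits_per_pixel: int) -> int:
    """Approximate display memory for one monitor:
    width x height x bytes-per-pixel, reported in KB."""
    return width * height * (bits_per_pixel // 8) // 1024

# A single 1920x1080 monitor at the 16 bpp cap configured above:
print(display_memory_kb(1920, 1080, 16))  # 4050 KB
```

At 131072 KB, the configured limit comfortably covers several high-resolution monitors at 16 bpp, so display memory is unlikely to be the bottleneck here.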


Is this a LAN-only scenario? Why do you use Always Lossless as the default? I would start with H.264 and Visual Quality High.
I would also remove the minimum FPS.
If you really want almost no endpoint CPU utilisation, your only option is Thinwire+, but that is not the best choice for CAD apps, so you need to decide what matters more: user experience or endpoint utilisation.



This is primarily LAN, although we have WAN sites as well. When we first deployed with NVIDIA GRID K1s we took advice from a consultancy, and these are the optimal settings we found under their guidance.

We have found that our performance has degraded since upgrading to AutoCAD 2017 (whether due to the increased hardware requirements of AutoCAD 2017 or to our endpoints being utilised more, we can't say).

We have a 45-day trial of the Tesla M10 from Dell, but so far we are unimpressed with the performance gains from swapping out the cards.

Are there any baseline Citrix policies we should be looking to start with?

We have installed 'NVIDIA-GRID-XenServer-7.0-384.99-385.90' and from this pack installed the RPM on our XenServer host, the license server, and the 2012 R2 driver on our Windows platform.

We are finding that when we change our policy from any type of lossless to Visual Quality (Medium or higher) we get glitching on anything graphical. We have confirmed that the GPU is passed through correctly and is visible.
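One way to confirm the GPU is not just visible but actually carrying the H.264 encode is to watch NVENC encoder session counts with `nvidia-smi` on the XenApp server while users are connected. A minimal sketch that parses its CSV output follows; the `--query-gpu` fields are real `nvidia-smi` options on recent NVIDIA drivers, but the sample output line here is hypothetical:

```python
import csv
import io

# On the XenApp server you would capture this with:
#   nvidia-smi --query-gpu=utilization.gpu,utilization.memory,encoder.stats.sessionCount \
#              --format=csv,noheader,nounits
# The line below is a hypothetical sample of that output.
sample = "45, 30, 2\n"

gpu_util, mem_util, encoder_sessions = (
    int(field) for field in next(csv.reader(io.StringIO(sample)))
)

# If encoder_sessions stays at 0 while HDX sessions are active,
# the H.264 encode is falling back to the server CPU.
print(encoder_sessions)  # 2
```

Watching these counters while toggling the "Use hardware encoding for video codec" policy should make it obvious whether the policy is taking effect.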

Are we being unrealistic in our expectations? Should we be looking at XenDesktop \ Desktop OS to fully utilise the GRID hardware? In your experience, is performance better with XenDesktop than with XenApp?