XenApp on Bare Metal with GRID

I have been referred a client that has installed XenApp on bare metal and has a GRID card in the server. Their XenApp sessions are not seeing the GPU. I have relayed to them that a hypervisor needs to be underneath Windows/XenApp in order to access the GPU on the GRID cards. I know that the intended design of GRID was to have a hypervisor in place and I believe that their configuration will not work. They are intent on implementing it in its current configuration. Could you please give me some technical reasons why it will not work like that?



It should work on bare metal with a plain Windows install using RemoteFX, and that's it. Leveraging true GPU passthrough or vGPU requires running XenApp as a VM on a hypervisor, as you said, and for vGPU it has to be either XenServer or VMware. That is all clearly indicated in the documentation.

Thanks Tobias. I appreciate your response. So you are saying that Windows/XenApp installed on bare metal can access the GRID GPUs via RDP/RemoteFX, correct? What about having the Citrix VDA for Server OS installed? Will that allow access to the GRID GPU? I have used Citrix’s Remote PC technology with the HDX 3D Pro VDA to access a regular workstation GPU over ICA. So would the Server OS HDX 3D Pro VDA get similar access to a GPU installed on the bare metal server?

I am trying to get this client some specific technical reasons why their configuration will not work. As you note, all of the documentation says to configure this with XenServer or vSphere, but I have not been able to locate any documentation that spells out that their configuration will not work. Do you have any further thoughts?

Thanks Tobias!


We have not tried anything with GRID on bare metal, so I cannot speak from any direct experience with that. However, any RDP session should be able to tap into RemoteFX on the server, since from that viewpoint the XenApp service just looks like a standard Windows server, provided you are not limited to an ICA connection. You won’t be able to leverage HDX or any of the Citrix-specific functionality available via XenDesktop/XenApp/XenServer. I would look for a standard Microsoft server manual on how GPUs are handled under Windows and use that to justify the limitations compared to delivery via XenServer or VMware ESX.

Thanks Tobias! If I find some additional information regarding this I will post it here.


Hi Tobias,

I spoke with an NVIDIA solutions architect and below is a summary of our conversation.

It appears that it is possible to make this configuration work but there are some caveats.

One of the main concerns is how to effectively allocate the four physical GPUs in the system (it has two GRID K2 cards, each with two GPUs). Windows itself has no built-in mechanism for allocating or balancing the GRID GPUs across XenApp sessions, and most applications are not multi-GPU aware, so even if Windows sees all four GPUs, most apps cannot take advantage of them. What may happen is that one GPU is used by all XenApp sessions while the other three sit unused.
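To make that allocation gap concrete, here is a purely conceptual sketch (Python; nothing of the sort exists in Windows or XenApp, and all names here are invented for illustration) of the difference between what happens by default and what a per-session balancer would need to do:

```python
from itertools import cycle

# Two GRID K2 cards = four physical GPUs (indices only, for illustration).
GPUS = [0, 1, 2, 3]

def default_behaviour(sessions):
    # What tends to happen without a balancer: every session
    # renders on the primary adapter and the other GPUs idle.
    return {s: GPUS[0] for s in sessions}

def round_robin_balancer(sessions):
    # The missing piece: spread sessions across all four GPUs.
    rr = cycle(GPUS)
    return {s: next(rr) for s in sessions}

sessions = [f"session-{i}" for i in range(8)]
print(default_behaviour(sessions))    # every session on GPU 0
print(round_robin_balancer(sessions)) # sessions spread 0,1,2,3,0,1,2,3
```

Obviously a real allocator would also have to survive session logoff/reconnect and respect per-GPU load, but the sketch shows why "Windows sees four GPUs" does not translate into four GPUs being used.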

This configuration may also require some custom configuration to get Windows to properly boot and recognize the GRID GPUs versus the onboard graphics card.

This configuration would still be able to deliver the GPU-rendered graphics to the endpoint via the HDX 3D Pro VDA, just as it would in a configuration with a hypervisor layer included.


Thanks for the update. I hope to publish a paper in the near future on combining multiple GRID engines into a single XenApp instance. Stay tuned… I am still working with Citrix on a few details,
plus it may not end up being officially supported, but it does work, and the load appears to get fairly evenly distributed between the GPUs. It could very likely be extended to cover more than two GPU engines. Running via XenApp certainly gives a lot more flexibility in the direction it looks like you want to go.

I’m looking forward to reading your article. Sounds like it will be interesting.



Hi guys, I’m looking at the same thing: I want to run bare metal W2K8 R2 with XenApp 6.5. However, the GRID K2 card is only visible via something like GPU-Z; no 3D applications run (DirectX errors), and GPU-Z never reports any load on the GPU when running browser-based 3D tests. In Device Manager the K2 shows an exclamation mark and says there are insufficient resources; I think it’s because the onboard Matrox video card in the WS460c is acting as the only video card. What do I need to do to make the GRID K2 card available to user sessions, and will it leverage both GPUs? I’ve seen the links regarding enabling DirectX and OpenGL GPU sharing and have added the OpenGL hotfix. What am I missing? Is bare metal possible with the GRID K2 and W2K8 R2?


Make sure you have your servers configured as per this article


And disable the onboard video card, or at the very least stop it from being the primary adapter (disabling it is easiest, though).
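If it helps, I believe the same thing can be scripted with Microsoft’s devcon utility rather than clicking through Device Manager. A sketch only: verify the hardware ID on your own host first, since 102B is Matrox’s PCI vendor ID but your onboard adapter may enumerate differently.

```
rem List the display adapters and their hardware IDs
devcon find =display

rem Disable the onboard Matrox adapter so the GRID K2 becomes primary
rem (check what the wildcard matches before disabling anything)
devcon disable "PCI\VEN_102B*"
```

A reboot afterwards is usually needed for the primary adapter change to take effect.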

Still waiting for Citrix to OK the publication, which contains at the moment some unsupported configurations. I have run the test configuration for over a month with no issues whatsoever, which is encouraging.

Thanks Jason, that’s what I was missing: I had to set the WS460 into “USER” mode in the BIOS to bypass the onboard video. I am now testing and it works well; however, it seems to be using only one GPU at a time. I have the OpenGL hotfix installed as well as EnableWPFhook set to 1. What am I missing so that I see load in GPU-Z distributed across both GPUs? (I’ve initiated tests from multiple sessions at once to try to spread the load.)

It sounds like there is a custom tweak to distribute the load, as mentioned above. What is the trick!?
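In case it helps pin down the tweak, here is the registry fragment I have in place for the WPF/DirectX hook, exported as a .reg file. As I understand it from the Citrix GPU sharing articles this is where the EnableWPFhook value lives, but double-check the key on your own build:

```
Windows Registry Editor Version 5.00

; Citrix XenApp GPU sharing hook for WPF/DirectX applications
; (key path as I understand it from the Citrix articles; verify on your build)
[HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\CtxHook\AppInit_Dlls\Graphics Helper]
"EnableWPFHook"=dword:00000001
```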

Thanks for the response :)

hi all!

Is there any news on bare metal NVIDIA GRID K2 with the HP WS460c Gen8 and XenApp 7.6? I don’t want to use a hypervisor! Has anyone configured this scenario in the real world?


Seems like that should work as long as you stick with GPU passthrough. I’d be interested if anyone has a real-world configuration like that running, as well.

The post above yours explains how to configure the host to allow it to work.

Spreading the load across multiple GPUs is handled by the OS/XenApp, so where the load is placed comes down to how the OpenGL sharing feature works. That’s a question for Citrix on how to manage.