I have a K2 in pass-through mode on XenServer 6.2 SP1 and am trying to determine how my applications are stressing the GPU. I added the NVIDIA WMI pack, but Windows Performance Monitor still shows 0 for all the GPU values I care about, such as memory and % GPU.
I forgot to add that nvidia-smi on the Windows VM does not show much data either, because WDDM is prohibiting it from getting GPU memory usage. GPU usage shows 0%, which I doubt, since I have 12 users logged in with a NASA WorldWind client.
Can you confirm that you’re not using the driver for vGPU?
If you are, can you switch to the passthrough driver from here?
Yes, I am using the vGPU driver, 332.83. I need vGPU to slice up the GPU with XenApp to distribute the application to as many users as possible.
William, I am unclear on how you have this configured. If you are using pass-through, then use the normal NVIDIA driver for your OS (Jason provided a link above); this pins the GPU to the guest running XenApp and gives it the full performance of that GPU. Or are you using vGPU on XenServer to share out the GPUs? To be clear, vGPU is not the same as pass-through, and I don’t believe it is supported with XenApp. Regardless, double sharing (vGPU shares the GPU first, then XenApp shares it again) does not make much sense from a performance standpoint. If you could provide more detail, we can dig into this further for you.
OK, I removed the vGPU driver and installed the 332.84 version for Windows 2008 R2 (the 340.84 version would not extract). I still cannot see any GPU utilization with nvidia-smi.exe or Windows Performance Monitor.
Are you using RDP to connect to the guest?
No, I am using Citrix ICA via XenApp. nvidia-smi shows the PIDs for the applications I am using, but memory usage reports insufficient permissions and a value of N/A. Is that because of the WDDM driver in use? I am testing both K1 and K2 cards and cannot determine which I should use, since I don’t have any data to base the decision on. My NASA WorldWind looks good for 20 users, but I need data to prove scalability and justify the choice of video card. Thanks, Willie Harrington
You can certainly measure GPU information from within an ICA session.
You must measure it at the server console. Since this is a VM, and since XenCenter doesn’t allow you to connect a console session to a VM with a GPU in passthrough, you need one of the following options.
I’ve successfully tested these methods for monitoring the server’s GPU resources:
a. Use Perfmon in an ICA session
b. Use VNC to make a console connection
c. Use RDP to make a console/admin connection (note that you can monitor the metrics but can’t run GPU applications through RDP, as Windows disables hardware acceleration for RDP sessions)
d. Connect from a remote machine in the domain
This was tested on a Server 2012 R2 machine running XenApp 7.5.
Remember that you’re measuring the resources for the whole server, not for the session, as there are no per-session metrics reported.
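As a supplement to the methods above, nvidia-smi’s query mode can also be scripted once the pass-through driver is reporting (under WDDM some fields may still come back as N/A, as noted earlier in the thread). Here is a minimal Python sketch that parses the CSV output; the sample line is illustrative, not captured from a real run:

```python
import csv
import io

def parse_gpu_stats(smi_csv: str):
    """Parse output from:
    nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total \
               --format=csv,noheader,nounits
    One row per GPU, fields in the order requested above."""
    rows = []
    for fields in csv.reader(io.StringIO(smi_csv)):
        util, mem_used, mem_total = (f.strip() for f in fields)
        rows.append({
            "gpu_util_pct": int(util),
            "mem_used_mb": int(mem_used),
            "mem_total_mb": int(mem_total),
        })
    return rows

# Illustrative sample line (utilization %, memory used MB, memory total MB):
sample = "61, 1388, 4096"
stats = parse_gpu_stats(sample)
print(stats[0])  # {'gpu_util_pct': 61, 'mem_used_mb': 1388, 'mem_total_mb': 4096}
```

Run on a schedule (e.g. from Task Scheduler) this gives you a utilisation history alongside the Perfmon counters.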
To add to my note above, I think the root cause of your issue is the original use of the vGPU driver.
If you remove it and the WMI pack completely, then start again with the pass-through driver, you should see the metrics start reporting.
This links to a quick video of it in action, reporting within the ICA session
I’ve an extended version which I’ll post to YouTube once it’s cleaned up, so you can see that it is possible to monitor within an ICA session, if you use the right driver.
When I removed the WMI pack, I no longer see any options under Performance Monitor for NVIDIA. I did get GPU-Z to give me some data, attached. What is the max I should use for memory on a K2 card in pass-through? I don’t have a URL to post the pic, but I show a max of 31% GPU load, 1047 MB dedicated memory, and 431 MB usage (dynamic) for 16 users, each playing an MP4 video and NASA WorldWind.
Did you remove the drivers as well, reboot, then install the 340 drivers?
These are the drivers for Server 2008 R2 and Server 2012 R2; install these.
Not sure I understand this question. You don’t allocate GPU memory in passthrough; it’s fixed at 4 GB per GPU (so 8 GB per card).
In XenApp the sharing is handled by XenApp and the server OS, so you have no control over it. To determine capacity you would need to load test, as with any XenApp workload, since load doesn’t increase utilisation linearly.
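To illustrate the load-testing approach, here is a minimal Python sketch of stepping through user counts and recording utilisation at each step. `sample_utilization` is a hypothetical caller-supplied hook (e.g. something that shells out to nvidia-smi or reads the NVIDIA perfmon counters); the fake sampler below is purely illustrative:

```python
def run_load_test(user_counts, sample_utilization):
    """Record GPU utilisation at each user-count step of a load test.

    `sample_utilization` is a caller-supplied function returning the
    current GPU utilisation as a percentage for the given user count.
    """
    results = {}
    for n in user_counts:
        # In a real test you would log n users in and let the workload
        # settle before sampling; utilisation does not scale linearly,
        # so measure at each step rather than extrapolating from one.
        results[n] = sample_utilization(n)
    return results

# Fake sampler standing in for real measurements (NOT real data):
fake = lambda n: min(100, round(n * 3.4))
print(run_load_test([6, 12, 18], fake))  # {6: 20, 12: 41, 18: 61}
```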
OK, after doing it in the correct order, I get the NVIDIA data in Performance Monitor. Thanks much.
Now, how to interpret what I am seeing? With 18 users, I am getting 61% GPU utilization, 1388 MB memory allocation (I assume out of 4 GB), and 584 MB dynamic memory. So am I nearing the max I can expect from this one GPU? I also notice that CPUs 0 and 2 are maxed at 100%, but I guess I will have to take that up with the Citrix and Microsoft forums.
Based on those numbers you could probably get an additional 8-10 users on, pushing you into 90% utilisation territory.
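That headroom estimate can be sanity-checked with back-of-envelope arithmetic, assuming per-user load is roughly constant (which, as noted earlier, is only approximate since load doesn’t scale linearly):

```python
# Numbers from the thread: 18 users drive the GPU to 61% utilisation,
# and we treat 90% as a practical ceiling for the estimate.
users = 18
util_pct = 61.0
ceiling_pct = 90.0

per_user = util_pct / users                       # ~3.39% per user
headroom_users = (ceiling_pct - util_pct) / per_user
print(round(headroom_users))  # ~9 additional users
```

That lands in the middle of the 8-10 range quoted above.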
The likely reason for the CPUs being maxed at 100% is ctxgfx.exe. This is the HDX encoder process, which is busy rendering all 18 of those sessions at 30 fps (or whatever it can achieve based on resolution, CPU speed, available resources, etc.).
The video below shows just this behaviour. Although in the video I’m using YouTube in Chrome, the effect on the CPU is the same.
How did you get such good video quality with XenApp without lossless HDX? My testing showed very fuzzy video with both my own video player and Windows Media Player playing .mp4 files, even with plenty of CPU resources remaining. I am using OpenGL exclusively.
I retested on XenDesktop 7.1 and HDX made all the difference in the world. Near-native video player performance.