Competition is good for everyone! For consumers, it brings choice and bargaining power. For vendors, it helps drive innovation to stay ahead in the market. And for engineers / architects, well, it just gives us more tech to play with, which is always a good thing :-)
By having the GPU sitting on the same die as the CPU, Intel definitely has a great solution there. However, they seem to sacrifice core count to do it (I can’t find one with more than 4 cores?!), which massively impacts server density, and that’s obviously not great as it leads to additional costs. You’re also limited to a 2GB frame buffer. Maybe that will change as the technology evolves.

Also, don’t forget AMD have moved into this arena as well. They take a different approach to Nvidia, which they call MxGPU (Multi User GPU), and they use SR-IOV to do it rather than a software layer like GRID. However, unless things have recently changed, one of the limitations of SR-IOV is that a VM is bound to that piece of hardware and cannot be migrated to another host in the event of a failure. Yes, this migration technology (vMotion / XenMotion) is not currently available for Nvidia either, but by using a software layer, the hypervisor vendors at least have the opportunity to develop it, whereas with a hardware-only approach, no one has done this to date. Another advantage of a software layer is that it’s easier to add new features and enhancements; with hardware, obviously not so much.
I think it’s safe to say that Nvidia did (let’s say) get "overly ambitious" with their initial pricing model for GRID 2.0. However, they did recognise this, which is why it was quickly refined to what it is now, which is much more palatable.
Depending on how the solution is designed, it can be pretty cost effective. For Task / Knowledge workers, you can be looking at over 100 users per physical server with the right CPU / Memory / GPU combination. Add to that the fact that users are no longer chained to their desks, PCs no longer need to be run and maintained, you save on air conditioning for all those PCs, etc. etc. These things all add up, and each has a value of some sort.
By using a GPU in the servers, the CPUs don’t have to work so hard, so you get more users per server, up to the point where you actually need fewer physical servers to support the same number of users. The total cost of the solution can be reduced because you don’t need to purchase, run, cool, maintain, support and license as much hardware.
If you compare a K1 to an M10 with XenDesktop or Horizon, the K1 supports a maximum of 32 users per card. With the same use case, the M10 supports 64, and gives a better experience thanks to the improved architecture and feature set, plus what I mentioned above about SUMs for support, updates, maintenance, feature enhancements etc., which you don’t get with the older GPUs. Also, don’t forget that’s concurrent users, not total users. Add a second card into the same server and you’re now looking at 64 users (2x K1) compared to 128 (2x M10), assuming you don’t run into CPU contention issues, that is. That’s double the density, so theoretically you’d need 50% fewer physical servers to support the same number of users. That’s less hardware to rack, install / configure, power, cool, maintain and support, and fewer hypervisors to license. If you run Citrix on top of vSphere, that’s quite a saving! …
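To make the density maths above concrete, here’s a quick back-of-the-envelope sketch in Python. The per-card figures (32 for K1, 64 for M10) come from the comment above; the two-cards-per-server and 1,024-user target are just illustrative assumptions, and it deliberately ignores CPU / memory contention:

```python
import math

# Per-card concurrent user figures from the comment above (XenDesktop / Horizon use case)
USERS_PER_K1 = 32
USERS_PER_M10 = 64

CARDS_PER_SERVER = 2   # assumption: two cards per physical server
TARGET_USERS = 1024    # hypothetical concurrent-user target

def servers_needed(users, users_per_card, cards_per_server=CARDS_PER_SERVER):
    """Physical servers required, ignoring CPU / memory contention."""
    return math.ceil(users / (users_per_card * cards_per_server))

k1_servers = servers_needed(TARGET_USERS, USERS_PER_K1)
m10_servers = servers_needed(TARGET_USERS, USERS_PER_M10)

print(f"K1:  {k1_servers} servers")   # K1:  16 servers
print(f"M10: {m10_servers} servers")  # M10: 8 servers
```

Halving the server count cascades into every other line item mentioned: rack space, power, cooling, maintenance contracts and hypervisor licences.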
Do you mind me asking, how many users is your client looking to support on this platform?
Regards
Ben