I have ESXi 5.5 installed and I’d like to configure passthrough to the VMs.
I’ve installed the .vib drivers.
I’ve configured the passthrough in the vCenter web UI.
I’ve attached the PCI device to the VMs.
The VM can see the card and can query it, but the display falls back to the native VMware driver.
Are there any more steps? The only deployment instructions I see from NVIDIA all refer to Horizon View, but I just want to attach these to standalone, unbrokered VMs. Is that supported?
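For reference, the sanity checks I ran from the ESXi shell look like this; the .vmx tweaks at the end are candidates I have seen suggested for exactly this driver-fallback symptom, so treat them as unverified assumptions rather than a confirmed fix:

    # Verify the NVIDIA driver VIB actually installed
    esxcli software vib list | grep -i nvidia

    # Verify the host still sees the GPU
    lspci | grep -i nvidia

    # Candidate .vmx additions (edit while the VM is powered off):
    hypervisor.cpuid.v0 = "FALSE"       # hide the hypervisor from the guest driver
    pciPassthru.use64bitMMIO = "TRUE"   # sometimes needed for GPUs with large BARs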
Thanks for your reply. We installed the .vib drivers because some testers want to switch back and forth between vSGA, vDGA, and vGPU. Should I remove them for vDGA to work?
Also, just to confirm: should the Horizon View docs still work if I’m not using Horizon View?
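If removal does turn out to be necessary, I assume it would be the standard esxcli VIB removal, something like the following (the package name below is a placeholder; look up the exact name first):

    # Find the exact NVIDIA package name
    esxcli software vib list | grep -i nvidia

    # Remove it by the name reported above, then reboot the host
    esxcli software vib remove -n NVIDIA-vGPU-VMware_ESXi_5.5_Host_Driver
    reboot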
I mean that I am not installing the VMware Horizon View Agent on the VM and I am not connecting them to a Horizon View Server. I simply have a VM running on ESXi and am hoping to use vDGA passthrough.
For now we’re connecting via the console, and then we’ll move on to other means. Is this supported? All the documentation about using vDGA and passthrough seems to explicitly talk about using VMware Horizon View (which we are not).
You cannot use the console once a GPU is assigned to a VM. Currently the vSphere console is not compatible with GPU-enabled VMs and has issues with mouse interaction and input. We don’t know VMware’s plans to update the console.
RDP will not give you an accelerated connection; in particular, DirectX is completely disabled. It is recommended that you install another remoting solution, such as Horizon View Direct Connection.
Do I need to uninstall the VIB from ESXi 5.5? I just need to enable GPU passthrough so the VM can use the card directly. What kind of software do I need to view the VM?
This is frustrating. I think what they are saying is this: how can we use vDGA or a shared GPU on an NVIDIA GRID K1 WITHOUT HAVING TO USE HORIZON?

Here’s my take: I can put the K1 card (haven’t tried yet, but will shortly) into a Windows Server 2012 R2 Hyper-V host and within MINUTES start sharing the GPUs with virtual machines. I have already done this using an NVIDIA Titan Z, sharing its 2 GPUs with 24 Hyper-V VMs, and like I said, the installation took only MINUTES, not HOURS or DAYS as it does with VMware Horizon. It’s patently ridiculous that VMware creates such a HUGE stumbling block, or server requirement (over 4 servers and a full Active Directory deployment), just to share a flipping GPU with virtual machines. Talk about a pain in the butt, especially if all I am trying to do is demonstrate with live hardware to a client who is probably going to say “screw VMware / NVIDIA; we just want to be able to play 720p movies for training on our VMs and that is it.” On a LAN, not a WAN.

RemoteFX is so much easier to set up than Horizon View. Horizon View stinks if you have fewer than 50 virtual machines and simply want to share a very basic GPU experience. So, in short: with VMware it takes HOURS, even DAYS, and with Windows Server 2012 just MINUTES. Granted, the WAN performance is horrid with RemoteFX on standard bandwidth, but not everyone expects that. I have spent weeks trying to find resources on how to share a GPU on VMware ESXi 6 without having to deploy NVIDIA GRID / Horizon, and it comes up short.
Is there no way to use GRID K1 profiles right off a vSphere server with the ESXi 6.0 hypervisor? Do we HAVE to push VMware’s junk Horizon client and use PCoIP even for a simple 32-station local area network? If so, then state it; say this: YOU HAVE TO SPEND thousands of dollars on VMware Horizon View, because it is the only way to get shared GPU on VMware, using PCoIP; it’s THE ONLY WAY, and YOU HAVE TO SPEND THOUSANDS OF DOLLARS on SERVERS and on VMware Horizon licenses, plus you have to pay for a GRID K1 or K2 card. So if you just want to do simple multimedia on a LAN, you are better off with Windows Server 2012 and an approved GPU that can be shared on a LAN using RemoteFX; it blows the doors off the cost and installation pain of Horizon by a GREAT MARGIN.
I hope I am wrong, NVIDIA: all I want to do is sell NVIDIA GRID to clients who may have 50 desktop computers they want to convert to VDI. They will have remote employees, but simple RDP is fine, or even NoMachine for mildly acceptable multimedia codecs. But c’mon: why can’t VMware / NVIDIA get smart and enable a person like me (a solutions architect and VDI optimization expert) to take a basic vSphere Essentials installation, pop in one or two NVIDIA GRID K1 cards, or one or two GRID K2 cards, and then use them on a LAN without having to create a flipping DATACENTER of servers they don’t want to use, let alone own? Not everyone wants to use Active Directory. It’s overkill for small LANs that have simple needs but may want to scale up. Yes, I can install closed-circuit TV broadcast; that’s not the point. I have clients that want to build basic A/B-roll video editing for YouTube videos and share them with other hosts on the network rather than upload to YouTube and then share. They saw your demo and they called me: “We want to use NVIDIA GRID!” I’m like, wow, it hauls butt. But then you find out you have to run Active Directory with a Horizon View Connection Server gobbling up 10 GB of RAM and forcing you to route packets through a software gateway, which is absurd, especially on a LAN. Direct connect is probably okay, but I still have to deploy a full Horizon stack. So anyway, I’ve taken a Titan Z on Server 2012 R2 and literally produced 30 virtual machines, 24 of which could play full-screen HD video simultaneously over a LAN, and I was able to accomplish that within a day.
The Horizon deployment? Days. Not good for a demo, not at all.
GRID vGPU is a feature of vSphere, not Horizon. You don’t need Horizon or AD or anything else to set up a vGPU profile on a VM; I’m not sure where you got the idea that Horizon was required for a vGPU profile.
Please refer to https://www.vmware.com/products/vsphere/compare to understand hypervisor capabilities by edition. If you check the matrix, you will see vGPU is a feature of Enterprise Plus in vSphere, so your plan to use Essentials would not be feasible.
You can use whatever protocol you want to remote the screen, but the remoting protocol needs to be able to do 3D - end of story.
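As a concrete illustration, a vGPU profile ultimately shows up as a few lines in the VM’s .vmx. The keys below follow what the GRID vGPU deployment guides for vSphere 6 describe, and the profile name is just an example; normally you would set this through the Web Client as a Shared PCI Device rather than hand-editing, so verify against the guide for your release:

    # Illustrative .vmx entries for an NVIDIA GRID vGPU (VM powered off)
    pciPassthru0.present = "TRUE"
    pciPassthru0.virtualDev = "vmiop"
    pciPassthru0.cfg.vgpu = "grid_k120q"   # example K1 profile; grid_k100, grid_k140q, etc. also exist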
I would also suggest doing a little more research on the network topology for Horizon. You are not forced to route through a software gateway; it is an option (usually not used for internal LAN traffic). Below is a link that shows some of the network paths available:
vGPU in vSphere can be used by any solution that will run on top of vSphere, so if you want to use a VDI solution other than Horizon then you can, as long as that remoting solution supports it. We have many customers using alternative solutions so you are not forced to use Horizon if it is not required.
If you wish to use RemoteFX, then you will need to comply with the requirements set down by Microsoft for hardware acceleration. Those are something that neither NVIDIA nor VMware can control.
As an aside, in my experience it takes a matter of hours to build a full Horizon deployment from scratch, less if AD, DNS, SQL, etc. are already in place. It’s just a matter of experience and knowledge, not a limitation in the product set.
There is a complete deployment guide, written jointly by VMware and NVIDIA, covering K1/K2 with vSphere 6, and it takes you through every step required for a complete deployment. Following it to the letter, most people can build a PoC in under a day.
It is currently being updated for GRID 2.0 with the latest updates from VMware.
Windows Server uses a different approach from NVIDIA vGPU and doesn’t give the VMs direct access to the GPU. It uses an API interception mechanism that translates graphics calls from the VM to the host. This may be easier for you to implement, but it provides less control over resources, consumes system RAM for graphics, delivers lower performance, and offers lower levels of API support (specifically, it lacks full OpenGL support) compared with a vGPU deployment on the same host.
Hello,
In the newest version of ESXi you cannot pass the GRID GPUs through to VMs, which means vDGA no longer works. Is this an NVIDIA or a VMware problem? In 5.5 everything was working fine. We have opened a case with VMware, but the engineers have not found a solution.
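One thing worth capturing for the support case is the host’s view of the device at VM power-on; a quick sketch from the ESXi shell (log wording varies by build, so treat the grep pattern as a starting point):

    # Does the host still enumerate the GRID GPUs?
    lspci | grep -i nvidia

    # Look for passthrough errors logged when the VM powers on
    grep -i pcipassthru /var/log/vmkernel.log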
Installed ESXi with 2 VMs. Passed through the K6000s, and both VMs are running perfectly with 4 outputs each (Windows 7 x64).
Following are the issues:
Using NVIDIA driver version 362.56.
NVIDIA Control Panel is missing the Mosaic feature. I got it sorted by using the Mosaic utility command line (see the sketch after this list).
NVIDIA Control Panel has no option to configure the sync board. If I check System Information, I can see the sync board, but I cannot configure sync.
Anyone else with this issue? I think for this type of setup I don’t need any VIB files on the ESXi host; correct me if I am wrong.
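For reference, the command-line tool mentioned above is NVIDIA’s configureMosaic utility. A hypothetical invocation for a 2x2 grid might look like the following, but the exact syntax and display ordinals depend on the utility version, so check its built-in help first:

    :: Show the current topology and supported options
    configureMosaic.exe info

    :: Hypothetical example: 2x2 Mosaic across four outputs on GPU 0
    configureMosaic.exe set rows=2 cols=2 out=0,0 out=0,1 out=0,2 out=0,3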
I was trying to do an evaluation with simple passthrough graphics and struggled for several days. I wanted to build a test bed from an HP Z820 workstation: dual 8-core Xeons with 128 GB of RAM and an SSD RAID array, ESXi 6.0, and three Quadro 4000 graphics cards. All I wanted was a way to use graphically accelerated virtual machines for some SolidWorks testing, and in the future to provide remote access for engineers over the weekend and when they are on the road.

The server, the graphics cards, and loading ESXi were all very straightforward. Not wanting to spend cash on licensing for an evaluation project was a struggle, but I found that the HP Remote Graphics Software sender/receiver can deliver accelerated graphics. Every HP Z workstation comes with a license for it, there is a 60-day eval license, and purchasing a license is not expensive. This was a simple and easy remote client to get going with little to no cost.

There are a few quirks. #1: do not open the console from the vSphere client; this disables the 3D card as the graphics display adapter. If you do, it’s okay: launch the graphics receiver, log in, go into Device Manager, and disable the VMware SVGA 3D adapter; this will switch the VM back to the passthrough card, in my case a Quadro 4000. This is working well for what I needed: a way to run a few VMs with CAD graphics. It may be simple for those who set up and configure VMs all the time, but all the different PDFs on vDGA and vGPU made this process confusing.
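If you end up doing that Device Manager step often, it can be scripted inside the guest. A sketch using Microsoft’s devcon utility, assuming the stock VMware SVGA II hardware ID (VEN_15AD&DEV_0405; verify yours under the adapter’s Details tab in Device Manager first):

    :: Disable the VMware SVGA adapter so the passthrough GPU drives the display
    devcon.exe disable "PCI\VEN_15AD&DEV_0405*"

    :: Re-enable it later if needed
    devcon.exe enable "PCI\VEN_15AD&DEV_0405*"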