Is it possible to use the Tesla T4 NVENC engine inside of a VM? The host is Windows Server 2016 and the VM is Windows Server 2019, under Hyper-V.
There are a number of inconsistencies:
1.) vGPU is supposedly required, but it is not available for Hyper-V
2.) DDA is supported for Windows Server OS, but a license is still needed, even though vGPU is not possible?
Same for bare metal: supported, but a license is still needed. See here:
3.) When trying to install Tesla Drivers downloaded from here:
And getting to this installer:
The installer reports:
“Unsupported Windows version”
when attempting to install into the Virtual Machine with Windows Server 2019.
5.) Microsoft gives instructions:
But it also says that some GPU makers will not allow some devices to work.
Should it be possible to access the Tesla T4 NVENC engine from within the VM as described? How?
I can only confirm that NVENC works when the drivers are installed on the host, but it is not clear if any license is required. The Control Panel looks like this:
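Aside from the Control Panel, the same thing can be checked from the command line with nvidia-smi, which ships with the data-center driver (GPU index 0 and the field names below are what current drivers expose; the encoder counters only move while an encode is actually running):

```powershell
# List the GPU, its current driver model (WDDM vs. TCC), and NVENC session
# counters; encoder.stats.* update while an encode is in flight.
nvidia-smi --query-gpu=name,driver_model.current,encoder.stats.sessionCount,encoder.stats.averageFps --format=csv
```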
NVENC definitely works in a VM. You need the vGPU driver to make the VM work properly; the DC driver only supports TCC mode on Windows.
Therefore there is no inconsistency but different features.
DC driver = TCC only
vGPU driver = full support (DirectX, OpenGL) and therefore requires a vGPU license (vWS)
“You need the vGPU driver to make the VM work properly.”
Except that on this page:
Hyper-V is not listed as supported for vGPU. But we do not need GPU virtualization anyway; we only need a single dedicated GPU within one VM. Is Microsoft’s Discrete Device Assignment (DDA) not the way to do it?
“DC driver = TCC only”
What is “DC Driver”?
I have found that “TCC only” means “Tesla Compute Cluster only”:
Which states: “Current drivers require a GRID license to enable WDDM on Tesla devices.”
Does NVENC work in TCC Mode or does NVENC count as a part of “Graphics”?
If the software specifically supports NVENC in TCC mode, then it could work?
How does one enable the Windows Server VM to run in TCC mode? Simply follow through with Microsoft DDA?
DDA is the way to go with Hyper-V. DDA is supported with the vGPU driver.
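For the DDA part, Microsoft’s documented Hyper-V cmdlets are enough. A minimal sketch, run in an elevated PowerShell on the host (the device name filter, the VM name "EncoderVM", and the MMIO sizes are placeholders you must adapt to your own setup):

```powershell
# Find the T4 and its PCIe location path.
$gpu = Get-PnpDevice -FriendlyName "*Tesla T4*"
$locationPath = (Get-PnpDeviceProperty -InstanceId $gpu.InstanceId `
    -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# Disable the device on the host and dismount it from the host partition.
Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

# Reserve MMIO space for the VM (example values; size them to your GPU's BARs)
# and pass the device through.
Set-VM -VMName "EncoderVM" -GuestControlledCacheTypes $true `
    -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB
Add-VMAssignableDevice -LocationPath $locationPath -VMName "EncoderVM"
```

After this, the GPU shows up in the guest’s Device Manager and the driver is installed inside the guest as if it were bare metal.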
I never used NVENC in TCC mode (with DC driver) but I would assume that this should work. If it works you could use it without a vGPU license. If it doesn’t work you would need to use the vGPU driver (and a vWS license).
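If you do try TCC first: once the GPU and driver are visible inside the VM, the driver model can be inspected and switched with nvidia-smi from an elevated prompt (GPU index 0 is an assumption; the change only takes effect after a reboot):

```powershell
# Show the current and pending driver model (WDDM or TCC) for GPU 0.
nvidia-smi -i 0 --query-gpu=driver_model.current,driver_model.pending --format=csv

# Request TCC (1 = TCC, 0 = WDDM); applied after the next reboot.
nvidia-smi -i 0 -dm 1
```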
BTW: It doesn’t matter that Hyper-V does not support vGPU. You would also need the license to use DDA (passthrough) or even bare metal. The driver makes the difference, not the virtualization technology…
Thanks a lot for this info. This does add a few stones into the mosaic. Still missing a few pieces, though:
DC driver = datacenter driver :)
1.) OK, then the DC drivers are downloadable from here:
Where it says: “Data Center/Tesla” in the dropdown box.
And this is also what I have installed into the host for now, just to check that the Tesla is present and working.
2.) In our case, NVENC is to be used by a Windows service, and it should keep running even while no user is logged in or connected to the Windows VM via Remote Desktop or other desktop-access software.
There are a number of articles on the internet (10+ years old) which say that NVIDIA TCC solves this problem, but there is nothing about the NVIDIA vGPU driver.
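As a side note, a quick way to smoke-test whether NVENC is reachable from a given session (assuming an ffmpeg build with NVENC support is available, which is an assumption on my part) is a one-second encode of a generated test pattern, discarding the output:

```powershell
# Succeeds only if the driver exposes an NVENC session to this process;
# errors like "Cannot load nvEncodeAPI" mean NVENC is not reachable.
ffmpeg -f lavfi -i testsrc=duration=1:size=1280x720:rate=30 -c:v h264_nvenc -f null -
```

Running this from the service context (no user logged in) would answer the headless question directly.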
Additionally, in case of vGPU:
Which license would we need for this? You mention vApps and vWS, but vCS looks closer to me?
Which license covers NVENC and allows capability described above?
“BTW: It doesn’t matter if Hyper-V is not supporting vGPU.”
3.) If I go here:
and try to register for a trial version of the vGPU software, it is not possible to specify Hyper-V, and the Hypervisor field is marked as “required”. How do we get the driver?
Our machine is certified for vGPU (ProLiant DL380 Gen10) although we will only be using one card within one VM with it:
After pretending to use VMware vSphere, I got to see this:
And the questions would be:
a.) Is the NVIDIA Microsoft Hyper-V 2016 software to be installed into the host or into the guest?
b.) Is there any support from the NVIDIA installer for configuring Microsoft DDA?