Hello.
I’m new to the vGPU world and I’m setting up a server with 1x Tesla T4. I had to install Windows Server 2019 with Hyper-V and configured the GPU with Discrete Device Assignment (DDA). It’s working, but now we would like to “split” the GPU into multiple vGPUs so it can be shared across multiple VMs running on the host.
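For reference, the DDA assignment on the host was done roughly along these lines (a minimal sketch; the device name, VM name, and MMIO values below are placeholders, not our exact ones):

```powershell
# Find the GPU's PCIe location path (friendly name is a placeholder)
$gpu = Get-PnpDevice -FriendlyName "NVIDIA Tesla T4"
$locationPath = ($gpu | Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# Disable the device on the host, dismount it, then hand it to the VM
Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
Add-VMAssignableDevice -LocationPath $locationPath -VMName "MyVM"

# Reserve MMIO space for the VM (example values from Microsoft's DDA guidance)
Set-VM -VMName "MyVM" -GuestControlledCacheTypes $true `
       -LowMemoryMappedIoSpace 1Gb -HighMemoryMappedIoSpace 32Gb
```

With DDA the whole card is passed through to a single VM, which is why only one machine can use it at a time.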
We have read the documentation on the NVIDIA website but still don’t get it.
We also tried to deploy a licence server (with Quadro vDWS), but after deploying the licence on the host we still see only 1 GPU.
Can someone enlighten us and help us get a clear view of what is possible in our case? Thanks and regards,
Greg