Hello.
The full-GPU profiles K180/K280 occupy an entire physical GPU of the K1/K2, apart from the reserved 320 MB of VRAM and the vgpu/libnvidia-vgpu process in Dom0. Such a profile does not share vGPU compute power with other VMs, so the Dom0 scheduler should not be needed.
Why is "frame_rate_limiter" enabled by default for these profiles?
Does disabling it remove the virtualization overhead (Dom0 scheduler) from the GPU?
Why is "cuda_enabled" disabled by default for these profiles?
I suppose the Dom0 scheduler is not able to handle sharing (time slicing) for CUDA processes, but with these profiles there is only concurrency within a single DomU, which should be possible.
I know that this parameter is intentionally protected by a digital signature in newer drivers (vgpuConfig.xml, >6/2015), which is why I am using an older driver.
Thanks for a technical answer, M.C.
Hi MC,
Why do you want the FRL off? In a VDI environment bandwidth usage is a concern, and I'd be curious to know when one would want to use the extra bandwidth to go above 60 fps - IMO this is one setting I would leave at its default.
I'm not sure about the CUDA setting - I agree that one makes less sense - but CUDA is not enabled on K1/K2 vGPU anyway, so it may be a hangover. Long term I expect these things to change as GPU architecture makes CUDA on shared GPUs more sensible. One good reason could be that whilst a full GPU is not shared under XenDesktop, it is under XenApp, where support is only experimental. Turning it on by default under XenApp would, I think, turn on an unsupported feature.
Just my best guesses!
Best wishes,
Rachel
Scheduling is done in the GPU silicon, not in DomU or Dom0.
The memory reservation is for mapping System Memory into the GPU memory. With PCI passthrough this is handled slightly differently in the OS and is not required at the hypervisor level.
As above, scheduling is not in the hypervisor; it's in the GPU silicon.
Changes in the Maxwell architecture allow us to deliver this (CUDA on vGPU) whilst we have to retain the limitation on Kepler.
We have to differentiate between what may work and what is fully QA'd and supported. This is the reason that drivers are signed, and why modifications to such settings place environments into an unsupported state - unsupported not just by NVIDIA, but also by the hypervisor vendors.
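For what it's worth, if you want to experiment with the FRL on a test system, the GRID vGPU documentation for XenServer describes a per-VM override via the platform:vgpu_extra_args key, which avoids editing the signed vgpuConfig.xml; the change only takes effect after the VM is restarted. A rough sketch of that call, wrapped in Python purely for illustration (the helper name and the UUID are placeholders, and this assumes it is run from Dom0 where the xe CLI is available):

import subprocess

def set_frame_rate_limiter(vm_uuid, enabled):
    # Sets frame_rate_limiter=1 (default, limiter on) or =0 (limiter off)
    # for a vGPU-enabled VM, using the per-VM vgpu_extra_args override.
    value = "1" if enabled else "0"
    subprocess.check_call([
        "xe", "vm-param-set",
        "uuid=" + vm_uuid,
        "platform:vgpu_extra_args=frame_rate_limiter=" + value,
    ])

# Example with a placeholder UUID; restart the VM afterwards:
# set_frame_rate_limiter("12345678-abcd-placeholder", enabled=False)

As noted above, though, running with the FRL disabled outside of benchmarking puts the environment into an unsupported configuration.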