CUDA Applications from 2 VMs

I am trying to run an experiment with 2 VMs running CUDA applications on the same GPU. From the licensing guide I found out that I need Quadro vDWS to run CUDA applications, but it also says that on Maxwell GPUs CUDA is only supported on the 8GB 1:1 profile. What exactly does this mean? Will I be able to set up 2 VMs with CUDA on a Quadro M4000?

Thanks for your reply!

First of all, the guide refers to Tesla GPUs, which are vGPU-enabled. You won't be able to run vGPU on a Quadro M4000!
Support for CUDA on vGPU with multiple VMs sharing a GPU was introduced with the Pascal generation.
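As a quick sanity check before debugging anything at the application level, you can verify from inside each guest VM whether it actually sees an NVIDIA device and driver at all. This is a minimal sketch, not specific to any vGPU profile; it only assumes the standard `nvidia-smi` tool that ships with the NVIDIA driver:

```shell
# Run inside each guest VM: confirm the guest sees an NVIDIA GPU and driver
# before troubleshooting CUDA applications themselves.
if command -v nvidia-smi >/dev/null 2>&1; then
    # Prints the GPU model and driver version as the guest sees them.
    nvidia-smi --query-gpu=name,driver_version --format=csv
else
    echo "nvidia-smi not found: no NVIDIA driver installed in this guest"
fi
```

If the guest shows no device here, the problem is the vGPU/passthrough setup or licensing, not CUDA itself.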
What if I switch to a P4000? Would that work, or is a Tesla card required to run my experiment?


Once again: vGPU = Tesla GPUs. You can run passthrough with a P4000, but no vGPU!

Simon, sorry, I've got a question about Quadro cards and passthrough:
if I install a Quadro card (e.g. a P4000 or an RTX) into a server (a Dell R740 in my case) and pass the card through to the VM, will my remote session be 3D accelerated when I connect to the VM using Horizon View (Blast or PCoIP)? Or can I only use the CUDA/RT cores of the card? I know that this isn't a traditional vGPU configuration :)
But what I need to know is whether the remote protocol is able to use the GPU to accelerate the visualization.
Thanks in advance!

Sure, it will work with GPU acceleration. But there might be other issues: Quadro boards have a physical display head, so it depends on the VDI solution to properly present a virtual display and, in particular, to allow proper resizing of a session. And there won't be official support for this configuration.

Best regards