GPUDirect in PCI-Passthrough configuration

Hello all,

We have a setup with two virtual machines on one workstation. Each VM has a GPU passed through to it, and together they form a processing-visualization pipeline: the results processed in the Linux VM are then visualized in the Windows VM.

The communication between the two VMs happens through the hypervisor's virtual network. This involves copying the data from the first GPU into the Linux VM's memory, then over the virtual network to the Windows VM, and finally to the second GPU. This is CPU-intensive, and we would like to explore GPUDirect technology for it.

The question is:

  • Would GPUDirect work in this PCI-passthrough setup, e.g. by also passing through network interfaces and connecting them with a cable as a loopback within the same machine?

With this we want to offload some of the load from the CPU/hypervisor.

Thanks.


I am curious why you need two systems for one task. Is it possible to migrate the Linux application to Windows, since CUDA and TensorRT are supported on Windows?

Unfortunately, that part of the design is due to legacy code. A restructuring may be possible in the future; for now we have to work with this setup.

Since this is a legacy implementation, what is your legacy HW setup for these two OS applications?

Well, the HW itself can be upgraded. We are currently using an RTX 4000 in Linux and a GTX 1070 in Windows. But we are more interested in the general question of whether a GPUDirect link can be established between two GPUs in the same system, to avoid overloading the CPU with data copies.

It is only possible when both GPUs are visible to the same operating system instance inside one system.
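To make that last point concrete, here is a minimal host-side sketch that checks whether CUDA peer-to-peer access (the mechanism behind GPUDirect P2P) is available between device pairs. This only works where a single OS instance enumerates both GPUs; in the passthrough setup described above, each VM sees only its own GPU, so the query would report at most one device per VM. This is an illustrative sketch, not a statement that P2P is supported on these particular cards:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    printf("Visible CUDA devices: %d\n", n);  // inside a passthrough VM this is typically 1

    // For each ordered pair of visible devices, ask the runtime whether
    // device i can directly access device j's memory (peer-to-peer).
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            if (i == j) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, i, j);
            printf("GPU %d -> GPU %d P2P: %s\n", i, j, canAccess ? "yes" : "no");
        }
    }
    return 0;
}
```

If the check succeeds on bare metal, peer access can be enabled with cudaDeviceEnablePeerAccess and copies issued with cudaMemcpyPeer, bypassing a staging copy through host memory. Running `nvidia-smi topo -m` on the host gives the same information at the PCIe-topology level without writing any code.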