Tesla K40 / K6000 performance discrepancy between Linux and Windows

I’ve got a Xeon workstation dual-booting Debian 7 (NVIDIA 331.38 long-lived driver) and Windows 7. It has a Quadro K6000 and a Tesla K40, both of which I’m using in GPU compute mode for DualSPHysics. This machine was upgraded from a dual Quadro 6000 setup. I am seeing large performance differences between Windows and Linux: a job that takes ~6 hours on Windows takes ~11-12 hours on Linux. Also, the K6000 under Linux performs about the same as a K5000 we have in another machine running Windows.

Any ideas on what could cause such a discrepancy? Are there known issues with this driver? Could this be a clock speed problem? I can provide more details if that would help.
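For what it’s worth, one quick way to rule a clock problem in or out is to query the clocks from the command line while a job is running. This is a sketch assuming `nvidia-smi` (which ships with the driver on both OSes) is on the PATH; the `3004,875` application-clock pair is the commonly cited memory,SM setting for the K40, but verify against what your own board reports:

```shell
# Show current, default, and supported clocks for all GPUs.
nvidia-smi -q -d CLOCK

# List the clock combinations the board actually supports.
nvidia-smi -q -d SUPPORTED_CLOCKS

# On Tesla-class boards such as the K40, application clocks can be pinned
# (memory clock, SM clock in MHz) so both OSes run at the same speeds.
nvidia-smi -ac 3004,875
```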

I assume you are using the TCC driver in Windows?

Not many people have those new top-end GPUs, but in general my experience has been that Windows with the TCC driver performs about the same as Linux. With the WDDM driver used for the GTX line in Windows, there is a bit more latency than there would be in Linux.

That is a really large performance difference, so there must be some issue with the Linux driver for the K6000 or some other configuration problem.

In Windows, we use the ODE driver. In the NVIDIA control panel, the Tesla K40 shows it’s in TCC mode, and the K6000 is in WDDM mode.
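As a sanity check, the driver model can also be read from the command line on Windows rather than from the control panel; this assumes an `nvidia-smi` recent enough to support the `driver_model` query fields:

```shell
# Windows only: report the current and pending driver model (TCC or WDDM)
# for each GPU in the system.
nvidia-smi --query-gpu=name,driver_model.current,driver_model.pending --format=csv
```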

With the 6000s we had before, we also noticed that the cards performed better in Linux.

We have tried multiple Linux driver versions, including 331.38 and 319.82, installed with the NVIDIA installer. We are going to try the Debian experimental repository to see if that fixes some configuration error.

I realize the cards are fairly new, but it seems like someone would have noticed a 50% performance loss.

The only thing I can think of is that the card(s) is/are running at PCI-E 2.0 in Linux and PCI-E 3.0 in Windows (assuming you have a large number of memory transfers occurring in your code). You can check that via the NVIDIA Control Panel on Windows or nvidia-settings on Linux. This of course assumes you have a PCI-E 3.0 compatible motherboard.
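The link generation can also be checked without a GUI; a minimal query, assuming the `nvidia-smi` bundled with the same driver package:

```shell
# Current vs. maximum PCIe link generation and width for each GPU.
# If gen.current is lower than gen.max under load, the link has downshifted.
nvidia-smi --query-gpu=name,pcie.link.gen.current,pcie.link.gen.max,pcie.link.width.current --format=csv
```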

If it’s running in PCI-E 2.0 in Linux, you can enable PCI-E 3.0 speeds if your motherboard supports it by following the steps here:


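For reference, on Linux the Gen3 link speed on Kepler-era Teslas is typically gated behind a kernel module parameter. This is a sketch of the usual approach; the parameter name `NVreg_EnablePCIeGen3` comes from the nvidia.ko module options, so verify it against the README for your driver version before relying on it:

```shell
# Ask the NVIDIA kernel module to negotiate PCIe Gen3 on the next load
# (takes effect after reloading the module or rebooting).
echo "options nvidia NVreg_EnablePCIeGen3=1" | sudo tee /etc/modprobe.d/nvidia-pcie.conf

# After a reload, confirm the module picked the parameter up.
cat /sys/module/nvidia/parameters/NVreg_EnablePCIeGen3
```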
Thanks for the tip, but I checked and both cards are running at PCI-E 3.0. Also, we are doing very few transfers over that bus. I will try the NVIDIA Linux forum.

I have an HP server running Debian 7 with a K5000 & a K40. I can’t install Windows on the server but I might be able to test the Linux benchmark on the machine.

PS. The server doesn’t support PCI-E 3.0.