Does anyone know about CUDA performance on Linux versus Windows?

I want to know: is there a performance difference between Linux and Windows when running the same code?

It used to be ~10% faster on Linux in general.
But you’d have to test it using your own code to really know.


If you use a WDDM driver you’ll still be hitting all those limitations that have been discussed over and over, which can severely cripple performance. Otherwise, the pure kernel performance should be the same, I think.

I also think that Linux and Windows performance is the same if you are only using the GPU for compute.

As pszilard says, Windows’s default WDDM driver model adds additional overhead compared to the Linux driver model (or the old WinXP driver model). This may slow down CUDA applications. How much slowdown one sees is very much dependent on the GPU usage pattern of the CUDA application. The CUDA driver tries to mitigate the WDDM overhead, for example by batching operations. However, this can in turn lead to performance artifacts.
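Following the advice above to test with your own code, one way to see the WDDM launch overhead (and the batching artifacts) on your own system is to time repeated null kernel launches, each forced to submit with a synchronize. This is a minimal sketch, not code from this thread; on a WDDM device the reported figure is typically noticeably higher than under TCC or Linux.

```cuda
// Sketch: measure average kernel launch + sync cost, which is where
// WDDM submission overhead shows up. Illustrative only.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void emptyKernel() {}

int main() {
    const int N = 1000;

    // Warm up so context creation is not counted in the timing.
    emptyKernel<<<1, 1>>>();
    cudaDeviceSynchronize();

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    for (int i = 0; i < N; ++i) {
        emptyKernel<<<1, 1>>>();
        cudaDeviceSynchronize();  // force each launch to be submitted now
    }
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("average launch + sync: %.1f us\n", 1000.0f * ms / N);
    return 0;
}
```

Dropping the per-iteration `cudaDeviceSynchronize()` changes what you measure: the WDDM driver can then batch launches, so the average cost per launch appears much lower, which is exactly the mitigation (and potential performance artifact) described above.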

There is an alternative Windows driver mode called the TCC driver which avoids these overheads, and should give you comparable performance between Linux and Windows platforms. This driver supports Tesla GPUs.
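On Windows you can check which driver model each GPU is currently using, and switch a supported GPU to TCC, with `nvidia-smi`. Switching requires administrator rights and takes effect only after a reboot (sketch; GPU index 0 is just an example):

```shell
:: Query the current and pending driver model for each GPU
nvidia-smi --query-gpu=name,driver_model.current,driver_model.pending --format=csv

:: Switch GPU 0 to TCC (1 = TCC, 0 = WDDM); needs admin, reboot to apply
nvidia-smi -g 0 -dm 1
```

Note that a GPU in TCC mode cannot drive a display, so this only makes sense for a dedicated compute card.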

I seem to recall that the TCC driver also works with certain Quadro models, but I am currently unable to locate confirming information regarding that, so I may be mistaken.