Time-dependent variation in kernel performance?

Hi all,

I’ve been benchmarking some programs (2D and 3D lattice Boltzmann solvers) and have come across something unusual. I would expect some random variation in the solver’s performance over time, but across a variety of problem sizes, block sizes and GPUs (not to mention that the 2D and 3D codes are completely different programs, not different configurations of the same one), I can see a very clear sinusoidal fluctuation in kernel execution times. For the two GPUs I’ve tested on (K5000m and K20c) the variation seems to have a frequency in the 10-12 Hz range.
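In case it helps anyone reproduce the measurement, here is a minimal sketch of how such a per-kernel time series can be captured with CUDA events; the collide_stream kernel and launch configuration below are just placeholders, not my actual solver:

```cpp
// Sketch: log per-iteration kernel times (CUDA events) against wall-clock time,
// so any periodic component in the series can be spotted by plotting or an FFT.
#include <cstdio>
#include <vector>
#include <chrono>
#include <cuda_runtime.h>

// Placeholder kernel standing in for the real LBM collide/stream step.
__global__ void collide_stream(float *f)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    f[idx] += 1.0f;
}

int main()
{
    const int nIter = 10000;
    float *d_f;
    cudaMalloc(&d_f, 1024 * 1024 * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    std::vector<double> wallTime(nIter);  // seconds since first iteration
    std::vector<float>  kernelMs(nIter);  // per-iteration kernel time (ms)

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < nIter; ++i) {
        cudaEventRecord(start);
        collide_stream<<<256, 256>>>(d_f);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        cudaEventElapsedTime(&kernelMs[i], start, stop);
        wallTime[i] = std::chrono::duration<double>(
                          std::chrono::steady_clock::now() - t0).count();
    }

    // Dump the series: kernel time vs. wall-clock time, ready for plotting/FFT.
    for (int i = 0; i < nIter; ++i)
        printf("%f %f\n", wallTime[i], kernelMs[i]);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_f);
    return 0;
}
```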

Is there any known explanation for this? My first guess is thermal/power management, but I haven’t been able to confirm it. Has anyone else experienced this?

Thanks.

Do the Quadro and Tesla cards have variable (“boost”) clock rates like the GeForce cards? The GeForce cards can have up to 15% variation in clock rate depending on current thermal load.

A quick google of the specs doesn’t reveal anything useful about whether or not they do. If anything, the fact that it’s mentioned explicitly on the spec page for the GeForce cards but not for the Quadro/Tesla ones leads me to believe that they don’t.

mjmawson, are you running your code under Windows or Linux? If Windows, GPU-Z has a logging function that will poll the GPU every second, record clock rates, power usage, etc., and save it all to a file you can look at. That will show whether your clock speeds are changing between runs.

If you are under Linux, you can accomplish the same thing by scripting nvidia-smi output.
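Something along these lines works as a starting point (a rough sketch only; it assumes your nvidia-smi is new enough to support --query-gpu, and the query fields and sample count may need adjusting):

```cpp
// Sketch: poll nvidia-smi once per second and log SM clock, power draw and
// temperature alongside a timestamp, so the log can be lined up with the
// kernel-timing trace afterwards.
#include <cstdio>
#include <ctime>
#include <unistd.h>

int main()
{
    const char *cmd =
        "nvidia-smi --query-gpu=clocks.sm,power.draw,temperature.gpu "
        "--format=csv,noheader";

    for (int i = 0; i < 600; ++i) {   // ~10 minutes at 1 sample/s
        FILE *p = popen(cmd, "r");
        if (!p) return 1;

        char line[256];
        if (fgets(line, sizeof(line), p))
            printf("%ld,%s", (long)time(NULL), line);  // timestamp + csv row
        pclose(p);

        fflush(stdout);
        sleep(1);   // nvidia-smi itself takes a noticeable fraction of a second,
                    // so ~1 Hz is about the practical sampling limit
    }
    return 0;
}
```

Note that 1 Hz sampling obviously can’t resolve a 10-12 Hz oscillation directly, but it will show whether the clocks or power state are moving around at all during a run.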