I’ve been benchmarking some programs (2D and 3D lattice Boltzmann solvers) and have come across something unusual. I would expect some random variation in the solver’s performance over time, but across a variety of problem sizes, block sizes, and GPUs (not to mention that the 2D and 3D codes are completely different programs, not just different configurations of the same one), I can see a very clear sinusoidal fluctuation in kernel execution times. For the two GPUs I’ve tested on (K5000M and K20c) the variation seems to have a frequency in the 10–12 Hz range.
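For reference, the kind of frequency estimate I mean can be sanity-checked by running an FFT over the per-launch timing series. Here’s a minimal sketch in Python/NumPy using synthetic data (the 200 Hz launch rate, 2 ms baseline, and 11 Hz oscillation are made-up numbers standing in for real measurements):

```python
import numpy as np

# Synthetic stand-in for measured kernel times: suppose a kernel
# launches every 5 ms (200 Hz) and its duration oscillates at ~11 Hz.
fs = 200.0                      # launches per second (assumed)
t = np.arange(0, 10, 1 / fs)    # 10 s worth of launches
base_ms = 2.0                   # nominal kernel time (hypothetical)
times = base_ms + 0.1 * np.sin(2 * np.pi * 11.0 * t)

# Estimate the dominant fluctuation frequency: remove the mean so the
# DC bin doesn't dominate, then find the strongest spectral peak.
spectrum = np.abs(np.fft.rfft(times - times.mean()))
freqs = np.fft.rfftfreq(len(times), d=1 / fs)
peak_hz = freqs[np.argmax(spectrum)]
print(f"dominant fluctuation: {peak_hz:.1f} Hz")  # ~11 Hz here
```

With real data you'd substitute the measured per-kernel times (e.g. from CUDA events) for the synthetic `times` array; an uneven launch rate would call for resampling first.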
Is there any known explanation for this? My go-to idea is thermal/power management but I’ve not been able to prove it. Has anyone else experienced this?