I’ve noticed that, if I kill a Windows command-line CUDA program with Ctrl-C, sometimes it screws up the GPU and leaves it in a reduced clock state. This is on an NVIDIA GeForce GTX 560; default clocks are 1701 MHz (shader) / 2052 MHz (memory). After it bugs out, the shader clock drops to 810 MHz and the memory clock drops to 324 MHz. Performance is reduced correspondingly. I have not yet figured out a way to reset the clocks to their proper levels (other than a reboot). I have an overclocking utility (EVGA Precision), but it has no effect in the reduced-clock state.
Has anyone else seen this behavior? Is there any way to fix the bugged GPU without rebooting?
I see this on Win7 x64, CUDA 4.0.17, video driver 280.26. I don’t recall seeing the bug with 270.xx, but I’m not 100% positive on that.
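For the record, the clocks can also be checked programmatically through NVML, independent of any overclocking utility. This is only a minimal sketch: it assumes the NVML header and library are installed on the system, and I’m not certain GeForce boards report every clock domain, so the query may come back unsupported.

#include <stdio.h>
#include <nvml.h>

int main(void)
{
    nvmlReturn_t r = nvmlInit();
    if (r != NVML_SUCCESS) {
        printf("nvmlInit failed: %s\n", nvmlErrorString(r));
        return 1;
    }

    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS) {
        unsigned int sm = 0, mem = 0;
        /* Current SM (shader) and memory clocks, reported in MHz */
        if (nvmlDeviceGetClockInfo(dev, NVML_CLOCK_SM, &sm) == NVML_SUCCESS &&
            nvmlDeviceGetClockInfo(dev, NVML_CLOCK_MEM, &mem) == NVML_SUCCESS)
            printf("SM clock: %u MHz, memory clock: %u MHz\n", sm, mem);
        else
            printf("Clock query not supported on this device/driver\n");
    }

    nvmlShutdown();
    return 0;
}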
I have something similar on an XP32 workstation with a GTX 280 (driver 275.33).
I have a “GPU compute only” app (i.e. no graphics/GL, it just opens a Windows form) that runs very slowly unless I open another app that uses graphics/GL. I could not check the clocks, but the exact same computation goes from over 10 s down to the expected under 5 s (which I was getting before with older drivers, around the 18x.xx versions).
Since I had this “workaround” (…and I no longer have access to that machine), I never filed a bug report for the issue, but it might be related to what you are seeing; I’d be glad to see it fixed.
Have you tried running another (unrelated) graphics/GL/DirectX app? In my case it just needs to be initialized; it can stay “idle” and not consume any GPU time.
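If you want to try it, something as small as the sketch below should be enough, I think. GLUT is just my choice for the example; I haven’t verified this exact snippet, and any app that brings up a GL or DirectX context and then sits idle should behave the same way.

#include <GL/glut.h>

/* Draw nothing; the callback only exists because GLUT requires one. */
static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutInitWindowSize(64, 64);
    glutCreateWindow("idle GL context");
    glutDisplayFunc(display);
    glutMainLoop();   /* keeps the GL context alive, idle, until the window is closed */
    return 0;
}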
That has happened to me when I’ve overclocked my GTX 570, though not exclusively with CUDA; it happens with games too.
The cause is the overclocking; the card might need a higher voltage at those clocks. The problem never happens to me when the clocks are at factory levels.