GTX 680 core clock idle

I have heard that the newer NVIDIA GPUs do not always stay at the base clock speed and will idle at a lower speed. I have observed this via GPU-Z and MSI Afterburner: the GTX 680 will usually idle around 324 MHz, then clock up when I run some CUDA code.

But it seems it never really gets up to the base clock, just pops into the 500-900 MHz range (when it should be over 1000 MHz). Also, the PCIe bus speed drops from 3.0 down to 1.1 when idle.

I just want to make sure this is correct, and that when I need the full clock speed I will get it even though the card idles at a lower clock.

Also, is there any way to set the clock higher manually and keep it from throttling down?

It works great for games, but I am not seeing the expected CUDA speed increase in Visual Studio 2010 (C++) when compared to my old GTX 460.

The deviceQuery and bandwidth tests are in line with expectations, but I just want to make sure the power is there and available when I need to call it from a .cu executable.
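For reference, here is roughly the sanity check I run from a .cu executable (device 0 assumed). Note that clockRate is the rated clock in kHz, not the live clock, so GPU-Z is still needed to watch the actual MHz:

```cpp
// Minimal sketch: print the rated core clock so it can be compared
// against what GPU-Z / Afterburner report. clockRate is the *rated*
// clock in kHz, not the live clock, so this only confirms what the
// card should reach under load.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);           // device 0 assumed
    printf("%s rated core clock: %.0f MHz\n",
           prop.name, prop.clockRate / 1000.0);  // clockRate is in kHz
    return 0;
}
```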

Thanks!

I don’t know how things work on Windows, but I’ve never seen the power level on the GTX 680 get stuck under Linux.

If you haven’t already, you should re-optimize the block size of your CUDA code for the GTX 680. I made the mistake initially of using the same block size for both the GTX 580 and 680, which was handicapping the 680’s performance quite a bit.
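A quick sweep like the sketch below makes it easy to see which block size the 680 prefers (myKernel is just a hypothetical stand-in workload; swap in your own kernel and problem size):

```cpp
// Rough sketch of a block-size sweep, timed with CUDA events.
// myKernel and N are assumptions; substitute your real kernel.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void myKernel(float *data, int n) {   // stand-in workload
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;
}

int main() {
    const int N = 1 << 24;
    float *d;
    cudaMalloc(&d, N * sizeof(float));

    int sizes[] = {128, 192, 256, 384, 512, 1024};
    for (int s = 0; s < 6; ++s) {
        int block = sizes[s];
        int grid  = (N + block - 1) / block;

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        myKernel<<<grid, block>>>(d, N);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms;
        cudaEventElapsedTime(&ms, start, stop);
        printf("block %4d: %.3f ms\n", block, ms);

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
    }
    cudaFree(d);
    return 0;
}
```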

My observation is that it takes about 250 ms of heavy kernel executions for the card to reach full “boost” MHz. Shorter-duration workloads only seem to raise the clock to the default speed. This is on a 4 GB GTX 680.

Once the card is fully boosted, it seems to remain in that state for ~40 seconds (or less), so there is a lot of hysteresis.

EVGA’s Precision X utility has a feature called “K-Boost” that appears to let you lock your card’s clocks. It requires a reboot. I tried it out thinking it would be good for benchmarking, but it’s just as easy to prod my GTX 680 with some work to get it into boost mode before launching a benchmarking run.
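Prodding the card can be as simple as something like this rough sketch (busyKernel and the ~300 ms figure are just assumptions based on the ~250 ms observation above):

```cpp
// Rough "prod the card" sketch: hammer the GPU with a throwaway
// kernel for ~300 ms so the driver raises the clocks, then run the
// real measurement. The kernel body and the 300 ms target are
// assumptions, not anything the driver documents.
#include <cuda_runtime.h>

__global__ void busyKernel(float *out) {
    float v = threadIdx.x * 0.001f;
    for (int i = 0; i < 10000; ++i)              // arbitrary heavy loop
        v = v * 1.000001f + 0.5f;
    out[blockIdx.x * blockDim.x + threadIdx.x] = v;  // keep the work observable
}

void warmUpGpu() {
    float *d;
    cudaMalloc(&d, 1024 * 256 * sizeof(float));

    cudaEvent_t start, now;
    cudaEventCreate(&start);
    cudaEventCreate(&now);
    cudaEventRecord(start);

    float elapsed = 0.0f;
    while (elapsed < 300.0f) {                   // ~300 ms of sustained load
        busyKernel<<<1024, 256>>>(d);
        cudaEventRecord(now);
        cudaEventSynchronize(now);
        cudaEventElapsedTime(&elapsed, start, now);
    }

    cudaEventDestroy(start);
    cudaEventDestroy(now);
    cudaFree(d);
}

int main() {
    warmUpGpu();   // boost the clocks, then launch the real benchmark
    return 0;
}
```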

Thanks. I see that I can change the voltage through EVGA’s Precision X utility, and am guessing that this directly affects the base clock. Does it also ramp the bus speed and the memory clock up to their non-idle levels?

Also, the Precision X utility says that I must disable SLI to set the clock speed manually, which is fine I guess, but I am not sure how that might affect other GPU-related operations.

So much to learn but the payoff seems to be worth the effort!

I am under the impression that adjusting the voltage doesn’t actually do anything unless you have flashed the card with a triple secret special BIOS. The hardware already auto-adjusts voltage.

Adjusting the GPU and memory clocks through any of these tuners seems to work very well, though.

To be clear on K-Boost: I thought it was a hassle for CUDA development because it leaves your card locked to a specific speed (and thus running hot nonstop). I only used it once and reverted after determining it wasn’t that useful. The rebooting isn’t worth the trouble; prodding the card before running a short kernel benchmark is just as easy. :)

However, one thing that was cool to see was downclocking Kepler in order to see how my kernels performed. Kepler is pretty awesome.