GTX 1080 FE powersave frequency scaling

With the GTX 1080 FE, the core clock is no longer fixed, because of the powersave functionality. That's fine in general, but for me as a developer it's very problematic.

I can no longer compare optimizations to my CUDA and OpenCL kernels, because performance now changes not only with my code changes but also with the wildly jumping core clock.
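The variation is easy to see by polling the SM clock while a kernel runs (a quick check; the one-second polling interval is arbitrary):

nvidia-smi -i 0 --query-gpu=clocks.sm --format=csv -l 1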

Is there a way to turn off this “feature”? I'm on Ubuntu 16.04 using NVIDIA driver 375.20.

Check whether nvidia-smi lets you set application clocks on your GPU. Query the supported clocks with:

nvidia-smi -q -i 0 -d SUPPORTED_CLOCKS

According to a recent thread, you may need the very latest Linux drivers to successfully set application clocks on a GTX 1080. Application clocks are set with the -ac switch of nvidia-smi. This is a privileged operation, so you may need sudo when setting the clocks.
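For example (the clock pair below is purely hypothetical; substitute a memory,graphics pair reported by the SUPPORTED_CLOCKS query above):

sudo nvidia-smi -i 0 -ac 5005,1911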

When you set application clocks to the higher of the supported values, I would strongly suggest also raising the “enforced power limit” to the maximum supported value, to avoid unexpectedly getting throttled by the default power limit (on my Quadro K2200, certain Folding@Home kernels would run into this).
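Something along these lines should work (the 240 W value is hypothetical; query the supported maximum first):

nvidia-smi -q -i 0 -d POWER    # shows the current and maximum power limit
sudo nvidia-smi -i 0 -pl 240   # substitute the reported maximum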

[Later:] You may find the following NVIDIA blog post helpful when dealing with application clocks. While written with specific reference to the K80, the information seems generally applicable to all GPUs that support application clocks.

https://devblogs.nvidia.com/parallelforall/increase-performance-gpu-boost-k80-autoboost/

Thanks njuffa for your response!

I've played a bit with the options you mentioned, but there's no change: none of them is supported on the 1080, and that's exactly what this topic is about.

On any other (older) GTX model they work fine and as expected. It's just the 1080 and, from what I've found in discussions on other sites, the 1070 as well.

Of course I updated to the latest driver before testing, and I even switched through several versions. I tested the following:

  • 375.20 (latest release)
  • 375.10 (latest beta)
  • 367.x (version from cuda8 developer package)

The response is almost the same with all of them:

root@et:~# nvidia-smi -q -i 0 -d SUPPORTED_CLOCKS

==============NVSMI LOG==============

Timestamp                           : Sun Nov 27 11:47:37 2016
Driver Version                      : 375.20

Attached GPUs                       : 3
GPU 0000:01:00.0
    Supported Clocks                : N/A
root@et:~# nvidia-smi -ac 4315,1600 -i 0        
Setting applications clocks is not supported for GPU 0000:01:00.0.
Treating as warning and moving on.
All done.

This doesn't mean overclocking doesn't work. It does:

root@et:~# nvidia-settings -a GPUGraphicsClockOffset[3]=200

  Attribute 'GPUGraphicsClockOffset' (et:0.0) assigned value 200.
  Attribute 'GPUGraphicsClockOffset' (et:0.1) assigned value 200.
  Attribute 'GPUGraphicsClockOffset' (et:0.2) assigned value 200.
  Attribute 'GPUGraphicsClockOffset' (et:0[gpu:0]) assigned value 200.
  Attribute 'GPUGraphicsClockOffset' (et:0[gpu:1]) assigned value 200.
  Attribute 'GPUGraphicsClockOffset' (et:0[gpu:2]) assigned value 200.

But that's a relative overclocking offset, applied on top of whatever clock the driver chooses.

What I'm looking for is a way to set a fixed clock so that I can test my kernel modification results.

I understand your frustration quite well. When (very rudimentary) automatic clock management was first introduced for GPUs years ago, for power management reasons, I vehemently insisted that a way to turn it off be provided, to facilitate proper performance comparison of CUDA applications as tweaks to the compiler and libraries were applied.

Note that the nvidia-smi command-line argument -i 0 targets a specific GPU (the one assigned ordinal 0). The log shown above indicates you have three GPUs. Are they all GTX 1080s? If not, it is not at all guaranteed that -i 0 targets your GTX 1080. I probably should have pointed this out, but the original question led me to believe there was only a single GPU in the system.
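You can check which ordinal maps to which board with:

nvidia-smi -L    # lists the ordinal, model name, and UUID of each attached GPU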

I cannot tell you whether application clocks should work on the GTX 1080; I only know that, in general, application clocks are supported on some GPUs but not others. If you believe the behavior you observe is due to a bug in nvidia-smi or the driver, consider filing a bug report with NVIDIA.

You're right, there are three GTX 1080s in this system. I just posted the output for one, as the commands above don't work with any of them.

I fully agree with you: there should be a way to turn off automatic clock management, to facilitate proper performance comparison of CUDA and OpenCL applications as tweaks to compilers and libraries are applied.

I'm afraid we're kind of stuck here; only NVIDIA can help.