As you can see, 1595 MHz is not the correct maximum clock value, so it’s being reported incorrectly.
Interestingly, on Windows, Unigine Valley/Heaven both report this value during the benchmark, so they’re reading it from the same place the Linux driver does.
The second bug is that the GPU never actually reaches the max clock value. On Windows, Unigine shows the GPU reaching 1595 MHz and, without boost, 1430 MHz. On Linux, however, Unigine and the driver show the GPU reaching 1430 MHz, but the boost is never triggered to take it up to what is reported as the “max clock” of 1595 MHz.
I’m not sure whether these issues are related (I’m guessing not, since the first problem is replicated by Unigine’s reporting on Windows as well), but there are two definite problems here.
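For anyone trying to reproduce this, a simple way to watch the clocks the driver reports while the benchmark runs is something like the following (the property names come from nvidia-smi --help-query-gpu; treat the exact list as an assumption for your driver version):
watch -n 1 nvidia-smi --query-gpu=clocks.gr,clocks.sm,temperature.gpu --format=csv,noheader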
edit: Also, the output of:
nvidia-smi -q -d CLOCK
shows
Timestamp : Tue Aug 4 19:10:20 2015
Driver Version : 352.30
Attached GPUs : 1
GPU 0000:02:00.0
    Clocks
        Graphics : 1430 MHz
        SM : 1430 MHz
        Memory : 3645 MHz
    Applications Clocks
        Graphics : 1202 MHz
        Memory : 3645 MHz
    Default Applications Clocks
        Graphics : 1202 MHz
        Memory : 3645 MHz
    Max Clocks
        Graphics : 1594 MHz
        SM : 1594 MHz
        Memory : 3645 MHz
    SM Clock Samples
        Duration : 1592.85 sec
        Number of Samples : 100
        Max : 1430 MHz
        Min : 135 MHz
        Avg : 996 MHz
    Memory Clock Samples
        Duration : 1592.85 sec
        Number of Samples : 100
        Max : 3645 MHz
        Min : 405 MHz
        Avg : 2942 MHz
    Clock Policy
        Auto Boost : N/A
        Auto Boost Default : N/A
Auto Boost is “N/A”. Trying to enable it using
sudo nvidia-smi --auto-boost-default=ENABLED -i 0
says:
Enabling/disabling default auto boosted clocks is not supported for GPU: 0000:02:00.0.
Treating as warning and moving on.
All done.
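(As far as I can tell, the only other clock control nvidia-smi exposes is application clocks, e.g.
sudo nvidia-smi -ac 3645,1430
with the memory clock first and the graphics clock second, though I’d expect a GeForce board to reject that with a similar “not supported” message.)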
And here’s the output of nvidia-smi -q under 100% load (running Unigine Heaven). I left it running for several minutes before capturing this so the temperature reading would be accurate. It’s a hybrid card, so it keeps temperatures nice and low; I’ve never seen it go over 60 degrees (and that was on a hot day).
==============NVSMI LOG==============
Timestamp : Thu Aug 13 14:58:39 2015
Driver Version : 352.30
Attached GPUs : 1
GPU 0000:02:00.0
    Product Name : GeForce GTX 980 Ti
    Product Brand : GeForce
    Display Mode : Enabled
    Display Active : Enabled
    Persistence Mode : Disabled
    Accounting Mode : Disabled
    Accounting Mode Buffer Size : 1920
    Driver Model
        Current : N/A
        Pending : N/A
    Serial Number : N/A
    GPU UUID : GPU-abf1c708-6869-aa93-651d-c3d3c1298b46
    Minor Number : 0
    VBIOS Version : 84.00.36.00.5B
    MultiGPU Board : No
    Board ID : 0x200
    Inforom Version
        Image Version : N/A
        OEM Object : N/A
        ECC Object : N/A
        Power Management Object : N/A
    GPU Operation Mode
        Current : N/A
        Pending : N/A
    PCI
        Bus : 0x02
        Device : 0x00
        Domain : 0x0000
        Device Id : 0x17C810DE
        Bus Id : 0000:02:00.0
        Sub System Id : 0x115110DE
        GPU Link Info
            PCIe Generation
                Max : 2
                Current : 2
            Link Width
                Max : 16x
                Current : 16x
        Bridge Chip
            Type : N/A
            Firmware : N/A
        Replays since reset : 0
        Tx Throughput : 26000 KB/s
        Rx Throughput : 121000 KB/s
    Fan Speed : 75 %
    Performance State : P0
    Clocks Throttle Reasons
        Idle : Not Active
        Applications Clocks Setting : Not Active
        SW Power Cap : Not Active
        HW Slowdown : Not Active
        Unknown : Not Active
    FB Memory Usage
        Total : 6143 MiB
        Used : 2296 MiB
        Free : 3847 MiB
    BAR1 Memory Usage
        Total : 256 MiB
        Used : 5 MiB
        Free : 251 MiB
    Compute Mode : Default
    Utilization
        Gpu : 99 %
        Memory : 40 %
        Encoder : 0 %
        Decoder : 0 %
    Ecc Mode
        Current : N/A
        Pending : N/A
    ECC Errors
        Volatile
            Single Bit
                Device Memory : N/A
                Register File : N/A
                L1 Cache : N/A
                L2 Cache : N/A
                Texture Memory : N/A
                Total : N/A
            Double Bit
                Device Memory : N/A
                Register File : N/A
                L1 Cache : N/A
                L2 Cache : N/A
                Texture Memory : N/A
                Total : N/A
        Aggregate
            Single Bit
                Device Memory : N/A
                Register File : N/A
                L1 Cache : N/A
                L2 Cache : N/A
                Texture Memory : N/A
                Total : N/A
            Double Bit
                Device Memory : N/A
                Register File : N/A
                L1 Cache : N/A
                L2 Cache : N/A
                Texture Memory : N/A
                Total : N/A
    Retired Pages
        Single Bit ECC : N/A
        Double Bit ECC : N/A
        Pending : N/A
    Temperature
        GPU Current Temp : 57 C
        GPU Shutdown Temp : 97 C
        GPU Slowdown Temp : 92 C
    Power Readings
        Power Management : Supported
        Power Draw : 227.28 W
        Power Limit : 290.00 W
        Default Power Limit : 290.00 W
        Enforced Power Limit : 290.00 W
        Min Power Limit : 150.00 W
        Max Power Limit : 310.00 W
    Clocks
        Graphics : 1430 MHz
        SM : 1430 MHz
        Memory : 3645 MHz
    Applications Clocks
        Graphics : 1202 MHz
        Memory : 3645 MHz
    Default Applications Clocks
        Graphics : 1202 MHz
        Memory : 3645 MHz
    Max Clocks
        Graphics : 1594 MHz
        SM : 1594 MHz
        Memory : 3645 MHz
    Clock Policy
        Auto Boost : N/A
        Auto Boost Default : N/A
    Processes
        Process ID : 611
            Type : G
            Name : /usr/lib/xorg-server/Xorg
            Used GPU Memory : 938 MiB
        Process ID : 1012
            Type : G
            Name : kwin
            Used GPU Memory : 182 MiB
        Process ID : 1384
            Type : G
            Name : /usr/lib/chromium/chromium --type=gpu-process --channel=1354.0.541516478 --v8-natives-passed-by-fd --v8-snapshot-passed-by-fd --disable-breakpad --supports-dual-gpus=false --gpu-driver-bug-workarounds=2,29,32,45,55,57 --gpu-vendor-id=0x10de --gpu-device-id=0x17c8 --gpu-driver-vendor=NVIDIA --gpu-driver-version=352.30 --v8-natives-passed-by-fd --v8-snapshot-passed-by-fd
            Used GPU Memory : 117 MiB
        Process ID : 1739
            Type : G
            Name : /usr/bin/nvidia-settings
            Used GPU Memory : 6 MiB
        Process ID : 29766
            Type : G
            Name : ./heaven_x64
            Used GPU Memory : 1023 MiB
Thanks for your help :)
edit: For completeness, here’s the rest of my system:
CPU: X58 Xeon X5670 @ 3.8 GHz
Motherboard: Asus P6T-SE
RAM: 32 GB 1600 MHz DDR3
OS: Arch Linux, kernel 4.1.4
Nvidia driver: 352.30
Power supply: EVGA Supernova G2 750 W
I don’t have any other GPUs installed, although I did have a 750 Ti in there before I replaced it with the 980 Ti.
After a quick look at nvidia-smi -h, I’m guessing you meant nvidia-smi -q -debug=nvsmi.log. I’ve attached the output of this. I don’t seem to be able to attach files, so I’ve uploaded it here:
edit: Never mind; I assumed I’d be able to attach a file as part of creating a post, and didn’t realise you had to create the post first and then attach the file to it. I’ve now attached the file. nvsmi.log (247 KB)
Sorry for the late response. The forums didn’t email me when you eventually replied.
Thanks for the log. I can see from it why your GPU isn’t at maximum clocks.
What I can say is that you’re maxing out your board specs. The good news is that you don’t have a thermal issue, and your workload is able to fully utilize the card. You’re simply hitting its full potential :)
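If you want to double-check what’s limiting the clocks at any given moment, nvidia-smi has a display filter that prints just the performance state and throttle reasons (-d PERFORMANCE should exist on your 352.30 driver, but nvidia-smi -h will confirm):
nvidia-smi -q -d PERFORMANCE
Under load, anything listed there as “Active” (e.g. SW Power Cap) is what’s holding the clock down.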
The Linux driver is written differently, by different people, and in 2015 clock speed isn’t necessarily a universal concept for something as complex as a GPU. Any numbers you read on the box should only be taken as representative of the manufacturer’s supported configuration under Windows. NVIDIA’s Linux drivers tend not to respect the different frequencies set by other OEMs, so if you have an EVGA or Gigabyte “factory OC” card, you shouldn’t put too much stock in it.
You can get PowerMizer to provide additional overclocking options if you want by making an xorg.conf file like I describe in this post:
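For reference, here’s a minimal sketch of the relevant xorg.conf Device section. The “Coolbits” option is real, but treat the Identifier and BusID values as placeholders, and check the README for your driver version since the meaning of the bits has changed over time (on drivers of this generation, 8 unlocks the PowerMizer clock offsets and 4 adds manual fan control):

Section "Device"
    Identifier "Device0"          # placeholder
    Driver     "nvidia"
    BusID      "PCI:2:0:0"        # matches the 0000:02:00.0 GPU above
    Option     "Coolbits" "12"    # 8 = clock offsets, 4 = fan control, 12 = both
EndSection

After restarting X, the offset controls appear on the PowerMizer page in nvidia-settings (or can be set from the command line via the GPUGraphicsClockOffset attribute).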
But generally, you shouldn’t expect this stuff to be 1:1 with Windows; it’s a completely different software environment, and Windows is the target of virtually 100% of the commercial benchmarks and whatnot. That doesn’t mean your card is necessarily “underperforming” on Linux.