NVIDIA driver has a habit of keeping my GPU at the highest performance level

I haven’t used Linux in a few months. Today I tried again and, after updating Arch, I still have the same problem.

My GTX 980 is kept at the highest performance level during simple tasks, for example GNOME or KDE desktop animations.

If I run nvidia-smi, it shows the card in P0 drawing 50-70 W.

This does not happen in Windows: I can watch multiple HD videos with multiple screens on, and the card never draws more than 20 W.

Do you have more than one monitor attached?

Yes, I have two monitors.
I’ve tried both with one monitor turned off and with both monitors on.

The problem is the same in both cases.

As I said before, this problem doesn’t happen in Windows.
I can watch an HD video with mpv using nvcodec and at the same time watch a Twitch stream at 1080p 60 fps with GPU acceleration enabled in Chromium, and nvidia-smi will show the GPU staying at P8 and 20 W.

I have had this problem on Debian, Ubuntu, Manjaro, and Arch (currently I use Arch), and I have tried GNOME and KDE (currently I use Budgie).

I can’t help you, but for anybody who can, you’d need to upload a debug log.
Run nvidia-bug-report.sh as root.
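A plain run is enough; the script writes a compressed log into the current directory:

sudo nvidia-bug-report.sh
# creates nvidia-bug-report.log.gz in the working directory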

Here’s a little test:
two monitors on,
Chromium with two tabs, playing a YouTube video at 720p 60 fps.

nvidia-smi
Thu Apr 2 14:08:53 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.66.03    Driver Version: 440.66.03    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 980     Off  | 00000000:01:00.0  On |                  N/A |
| 16%   33C    P0    54W / 195W |    203MiB /  4040MiB |      8%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       661      G   /usr/lib/Xorg                                105MiB |
|    0       987      G   budgie-wm                                     21MiB |
|    0     12124      G   …AAAAAAAAAAAAAAgAAAAAAAAA --shared-files     71MiB |
+-----------------------------------------------------------------------------+
As I said, this happens with simple tasks such as desktop animations, watching a video, etc.

nvidia-bug-report.log (740.0 KB)

Same test in Windows, but this time Chromium with 5 tabs:
a YouTube video at 1080p 60 fps, and also an HD video in mpv using nvenc.
nvidia-smi.exe
Thu Apr 02 12:29:32 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 442.59       Driver Version: 442.59       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 980     WDDM | 00000000:01:00.0  On |                  N/A |
| 14%   29C    P8    22W / 195W |    201MiB /  4096MiB |     25%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       396    C+G   Insufficient Permissions                      N/A   |
|    0      2600    C+G   C:\Program Files\mpv\mpv.exe                  N/A   |
|    0      3168    C+G   C:\Windows\explorer.exe                       N/A   |
|    0      3520    C+G   …t_cw5n1h2txyewy\ShellExperienceHost.exe      N/A   |
|    0      3596    C+G   C:\Program Files\Chrome\chrome.exe            N/A   |
|    0      3684    C+G   …dows.Cortana_cw5n1h2txyewy\SearchUI.exe      N/A   |
|    0      5588    C+G   …mmersiveControlPanel\SystemSettings.exe      N/A   |
+-----------------------------------------------------------------------------+

It’s a long-standing issue: the Linux driver clocks up instantly but then rarely ever clocks down again, see:
https://forums.developer.nvidia.com/t/if-you-have-gpu-clock-boost-problems-please-try-gl-experimentalperfstrategy-1/71762/22
Doesn’t really help, though.
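For anyone who wants to test the variable from that thread anyway, it goes into the environment of the GL client (or system-wide, e.g. in /etc/environment):

__GL_ExperimentalPerfStrategy=1 glxgears
# glxgears is only a stand-in here for any OpenGL application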

I see, but I don’t understand.
This should be a high priority for any company.
I also don’t understand why there aren’t more threads demanding a solution; the last answer in that thread was in August last year. To me this is a serious failure. Am I exaggerating?

No, you’re not. I guess most people simply don’t care about power consumption when installing some high-end GPU.
As a side note (I think I mentioned it in that thread as well), when using render offload the clock management does work. So it might also be something related to Xorg. IDK, pure speculation.
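To be clear, render offload here means PRIME render offload, i.e. explicitly sending individual applications to the NVIDIA GPU with the documented environment variables; this assumes an offload-capable Xorg setup:

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep vendor
# glxinfo is only used here to verify which GPU ends up rendering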

Thank you for the answer.
I thought it could be the compositor, but I tried others, like KDE’s or Compiz, and they were all the same.

Maybe I’ll try Wayland and see if it’s Xorg’s fault.

In Windows with Overwatch open at the game menu, my GPU draws less power than in Linux watching a video.

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 442.59       Driver Version: 442.59       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 980     WDDM | 00000000:01:00.0  On |                  N/A |
| 26%   36C    P5    36W / 195W |    528MiB /  4096MiB |     25%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       396    C+G   Insufficient Permissions                      N/A   |
|    0      3168    C+G   C:\Windows\explorer.exe                       N/A   |
|    0      3220    C+G   C:\Program Files\Chrome\chrome.exe            N/A   |
|    0      3520    C+G   …t_cw5n1h2txyewy\ShellExperienceHost.exe      N/A   |
|    0      3684    C+G   …dows.Cortana_cw5n1h2txyewy\SearchUI.exe      N/A   |
|    0      3960    C+G   D:\Overwatch_retail_\Overwatch.exe            N/A   |
|    0      5288    C+G   …mmersiveControlPanel\SystemSettings.exe      N/A   |
+-----------------------------------------------------------------------------+

In Windows with Overwatch running at 124 fps, my GPU draws about the same as watching a video in Linux.

nvidia-smi.exe
Thu Apr 02 23:34:40 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 442.59       Driver Version: 442.59       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 980     WDDM | 00000000:01:00.0  On |                  N/A |
| 26%   39C    P0    60W / 195W |    977MiB /  4096MiB |     16%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       396    C+G   Insufficient Permissions                      N/A   |
|    0      3168    C+G   C:\Windows\explorer.exe                       N/A   |
|    0      3520    C+G   …t_cw5n1h2txyewy\ShellExperienceHost.exe      N/A   |
|    0      3684    C+G   …dows.Cortana_cw5n1h2txyewy\SearchUI.exe      N/A   |
|    0      3960    C+G   D:\Overwatch_retail_\Overwatch.exe            N/A   |
|    0      5288    C+G   …mmersiveControlPanel\SystemSettings.exe      N/A   |
+-----------------------------------------------------------------------------+

This is the power draw in Linux while watching a 720p video in Chromium:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.66.03    Driver Version: 440.66.03    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 980     Off  | 00000000:01:00.0  On |                  N/A |
| 16%   33C    P0    54W / 195W |    203MiB /  4040MiB |      8%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       661      G   /usr/lib/Xorg                                105MiB |
|    0       987      G   budgie-wm                                     21MiB |
|    0     12124      G   …AAAAAAAAAAAAAAgAAAAAAAAA --shared-files     71MiB |
+-----------------------------------------------------------------------------+

OK, I see the game was only running at 60 fps. My fault: when the window is not focused, the fps drops to 60.

I’m sorry I didn’t notice it.

I also apologize for my way of expressing myself; English is not my native language, but I hope you understand what I am saying.

Here’s mpv with a medium-quality HD video,
using nvenc and vo=gpu.
I’m just playing the video, with no other apps in the background.
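For anyone reproducing this, the invocation would be roughly the following (video.mkv is a placeholder; I’m assuming hardware decoding via NVDEC):

mpv --vo=gpu --hwdec=nvdec video.mkv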

(screenshots)
nvidia-bug-report.log (677.6 KB)

Now I’m going to do the same test in Windows:
same video, same mpv configuration.

(screenshots)

Do you need more proof? I’ll be happy to provide it.
But please don’t ignore this any longer.

FYI, I looked into this a while ago; it’s also happening on a plain X server with no WM at all.

Wayland gives some hope, but I doubt it.

Anyway, as you said before, it seems no one really cares.
For me, though, it was the reason I quit Linux.

I’ve tried this and it seems to work better:

sudo systemctl start nvidia-persistenced.service
sudo nvidia-smi -ac 324,135

Now the GPU doesn’t hit P0 when opening a window or during a desktop animation.
It also seems to stay at P8 even when watching an HD video.
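For anyone replicating this: the valid memory,graphics pairs for -ac can be listed, and the setting reverted, with standard nvidia-smi options:

nvidia-smi -q -d SUPPORTED_CLOCKS
# lists the valid <memory,graphics> application clock pairs
sudo nvidia-smi -rac
# resets application clocks to the board defaults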

nvidia-smi
Fri Apr 3 17:18:27 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.66.07    Driver Version: 440.66.07    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 980     On   | 00000000:01:00.0  On |                  N/A |
| 15%   30C    P8    23W / 195W |    295MiB /  4040MiB |      9%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       608      G   /usr/lib/Xorg                                 90MiB |
|    0       986      G   budgie-wm                                     12MiB |
|    0      1058      G   /usr/bin/nvidia-settings                       0MiB |
|    0      1187      G   …AAAAAAAAAAAAAAgAAAAAAAAA --shared-files     35MiB |
|    0      2237    C+G   mpv                                          138MiB |
+-----------------------------------------------------------------------------+

Not everything is perfect: in some circumstances the GPU still doesn’t stay at idle even with nvidia-smi -ac applied (sorry, I don’t know how to explain it better in English).

For example, when watching an HD video and browsing in Chromium at the same time.
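An easy way to watch it is to poll the performance state and power draw while reproducing the load (standard nvidia-smi query fields):

watch -n 1 nvidia-smi --query-gpu=pstate,power.draw,clocks.gr --format=csv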

nvidia-smi
Fri Apr 3 17:36:16 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.66.07    Driver Version: 440.66.07    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 980     On   | 00000000:01:00.0  On |                  N/A |
| 12%   29C    P5    28W / 195W |    361MiB /  4040MiB |      4%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       608      G   /usr/lib/Xorg                                110MiB |
|    0       986      G   budgie-wm                                     11MiB |
|    0      1058      G   /usr/bin/nvidia-settings                       0MiB |
|    0      1187      G   …AAAAAAAAAAAAAAgAAAAAAAAA --shared-files     82MiB |
|    0      3054    C+G   mpv                                          138MiB |
+-----------------------------------------------------------------------------+
However, the power draw is still lower than without nvidia-smi -ac.

You’re luckier than others; setting application clocks is rarely supported on GeForce-type cards, and it only affects compute (‘C’) tasks like CUDA-enabled mpv. The ‘G’ (graphics) tasks are still allowed to use the highest clocks.

Yeah, for me it’s not a solution, just a workaround.

The GPU hits P5 with 759/1620 MHz clocks but only draws 30 W.

I’m still waiting for an answer from the NVIDIA devs; this problem needs to be fixed.

If anyone wants me to run any other tests, tell me and I’ll do it.

Unfortunately this has been an issue on Linux for years now, with no real fix in sight. It seems to be related to running multiple monitors and happens in Windows as well. In Windows, using NVIDIA Inspector with Multi Display Power Saver will fix the issue, but no such app exists for Linux. As a result, sitting on the desktop idle with 0% GPU utilization, on Windows my 1080Ti draws 12W, whereas on Linux it draws 66W. An experimental flag was added in driver 418.56, but at least for me on 440.82, it doesn’t seem to work and the GPU is still pegged at its max frequency.

After upgrading to an RTX 3080 Ti, I am also affected by this and it’s very annoying because it causes my GPU fans to be loud during desktop usage. It does seem to be related to multiple monitors and/or refresh rate. I have four 1440p monitors and tested various configurations:

RTX 3080, driver 465.24.02, 4 monitors: worked, powermizer would bounce between level 0 and 1 on idle desktop. (no longer have this card)

RTX 3080 Ti, driver 470.57.02, 4 monitors (1@144hz, 3@75hz): stays at highest level, uses 93w on idle desktop, GPU temp 45c, fans loud

RTX 3080 Ti, driver 470.57.02, 1 monitor at 144hz: works, drops to level 0 on desktop, uses 28w, GPU temp 32c, fans silent

RTX 3080 Ti, driver 470.57.02, 2 monitors at 144hz and 75hz: stuck at highest level / 93w / 45c / loud

RTX 3080 Ti, driver 470.57.02, 2 monitors at 60hz: works, drops to level 0 on desktop

RTX 3080 Ti, driver 470.57.02, 3 monitors at 60hz: back to being stuck at highest level

So it seems that two 1440p monitors at 60 Hz is about the highest I can go before powermizer stops working. The 3080 Ti should have no problem idling 4 monitors on the desktop at the lowest power level. I work at my computer all day and it’s wasting 65 W and putting out more noise and heat for no reason.

Edit: If I let the computer idle until the displays turn off and then SSH into the box, the GPU is at the lowest power level and using 30w. It’s only stuck at the highest level when the monitors are on.

Yep, same thing here with a 3080 driving a 3440x1440 and a 2560x1440, both at 120 Hz. It’s long-standing driver behavior that multi-monitor shuts off power saving, and it’s consistent across Windows and Linux.

I was hoping some enterprising tweaker would have made a driver patch to re-enable it, but I haven’t encountered one yet. Will post back if I find anything like this.

Probably the way to go is some tool that directly asserts clock speeds based on the running application, something like the sketch below. Quite unfortunate.
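A minimal sketch of what I mean, assuming a card that supports locked GPU clocks via nvidia-smi -lgc/-rgc (Volta or newer); the process name and clock range are placeholders:

#!/bin/sh
# Hypothetical clock governor: keep the GPU locked to low clocks unless
# a known heavy application is running.
while true; do
    if pgrep -x heavy_app >/dev/null; then  # placeholder process name
        sudo nvidia-smi -rgc                # release the lock so the driver may boost
    else
        sudo nvidia-smi -lgc 210,420        # lock to a low min,max MHz range (placeholder values)
    fi
    sleep 5
done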

I also have this problem.
With two monitors, the video card stays at performance level 2.
With one monitor it switches perfectly to level 0, as it should.

RTX 3080
Gentoo
Drivers 510.68.02 and 510.73.05
Kernel 5.15.32

Please fix it!
