Kernel module option NVreg_RegistryDwords for PowerMizerEnable doesn't work on 530.41.03

I’m using a 1080 Ti on Arch Linux.

I’ve been fighting with /etc/modprobe.d/nvidia.conf file to limit performance level to minimum.

Turns out it works fine on driver 520.56.06 but has no effect on 530.41.03.

Contents of the nvidia.conf file:
options nvidia NVreg_RegistryDwords="PowerMizerEnable=0x1;PerfLevelSrc=0x2222;PowerMizerDefault=0x2;PowerMizerDefaultAC=0x2;PowerMizerLevel=0x2"
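One thing worth ruling out first: forum software tends to replace straight quotes with curly ones, and an options line containing them usually doesn’t parse as intended once pasted into /etc/modprobe.d/. A quick sanity check (the `options` line below is just a shortened copy of the one from this post):

```shell
# Flag any non-ASCII bytes (e.g. curly quotes picked up from a forum
# paste) in a modprobe options line before installing it.
line='options nvidia NVreg_RegistryDwords="PowerMizerEnable=0x1;PerfLevelSrc=0x2222"'
if printf '%s' "$line" | LC_ALL=C grep -q '[^ -~]'; then
  echo "non-ASCII found"
else
  echo "ok"
fi
```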

Same here.

Gentoo Linux
Kernel 5.15.104
X.Org 21.1.7

/etc/modprobe.d/99-local.conf:

options nvidia \
        NVreg_RegistryDwords="PowerMizerEnable=0x1; PerfLevelSrc=0x3333; PowerMizerLevel=0x2; PowerMizerDefault=0x2; PowerMizerDefaultAC=0x2"

Worked fine with any previous driver. Clocks are low, temp never exceeds 45C and the fans stay off… With 530.41.03, the GPU clocks up to max and the fans start spinning since GPU reaches 60C pretty much all the time.

nvidia-bug-report.log.gz (337.7 KB)

Same problem with 535.54.03.

The last working driver is 525.116.04.

Same with 535.54.03

Same with 535.98.

Apparently, this functionality has been completely removed after 525. Grepping the unpacked nvidia driver for “PowerMizerEnable” doesn’t find anything anymore.

Why? I (and others) have been depending on this functionality for many years now. You can’t just remove it as if it’s nothing :-/ What are we going to do now? You’re locking us into the 525 driver forever, and at some point that driver will stop working with new kernels. Why are you doing this?

I play games that are not heavy on the GPU, like LoL and WoW.
Setting the CPU governor to powersave and limiting GPU power usage with PowerMizer trims the power bill.

I also do some graphics programming, and being able to check the performance of an application while the GPU power state is fixed is very convenient.

So I’d really like if it worked again on newer versions of the driver.
525 wouldn’t work with newer Arch kernels, so I had to downgrade the kernel.

I had to search and install these manually to make it work.

lib32-nvidia-utils-525.60.11-1-x86_64.pkg.tar.zst
linux-zen-6.1.9.zen1-1-x86_64.pkg.tar.zst
linux-zen-headers-6.1.9.zen1-1-x86_64.pkg.tar.zst
nvidia-dkms-525.60.11-1-x86_64.pkg.tar.zst
nvidia-settings-525.60.11-1-x86_64.pkg.tar.zst
nvidia-utils-525.60.11-1-x86_64.pkg.tar.zst
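To keep a partial downgrade like this from being undone on the next update, the packages above can be installed with `pacman -U` and then held back in `/etc/pacman.conf` (a fragment; the lines go under the `[options]` section, package names taken from the list above):

```shell
# /etc/pacman.conf — pin the downgraded driver and kernel packages
IgnorePkg = nvidia-dkms nvidia-utils lib32-nvidia-utils nvidia-settings
IgnorePkg = linux-zen linux-zen-headers
```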

I’ll just bump this.

We have filed bug 4282064 internally for tracking purposes.
Will keep you posted on it.

A workaround I found for now is to use nvidia-smi to set max clocks for core and memory. First, get the supported clocks with:

nvidia-smi -q -d SUPPORTED_CLOCKS

Before changing the clocks, you need to enable persistence mode on the GPU. I’m not sure why, but apparently the driver can forget the clock settings after a while:

sudo nvidia-smi -pm ENABLED

Note that for memory, you use the actual clock, not the effective one. For example, if nvidia-settings showed a 1620 MHz memory clock before, nvidia-smi reports this as 810. Here I want to limit the maximum clocks to 405 core and 1620 memory, so I use:

sudo nvidia-smi -i 0 -ac 810,405

The -i parameter is the GPU you want to change, numbered from 0.

You can only use clock values reported by nvidia-smi. You can’t use arbitrary values.
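The effective-to-actual conversion described above can be sanity-checked with shell arithmetic (the factor of two is what the numbers in this thread imply for this card’s memory; other cards may differ):

```shell
# nvidia-settings shows the effective (double data rate) memory clock;
# nvidia-smi expects the actual clock, which here is half of it.
effective_mhz=1620
actual_mhz=$((effective_mhz / 2))
echo "$actual_mhz"   # prints 810
```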

This needs to be done on each boot. Alternatively, you can set up the nvidia-persistenced service in systemd. I haven’t done that, but I believe this will remember GPU configuration between reboots and restore the values automatically.
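To avoid redoing this by hand on every boot, the two commands can also be wrapped in a small oneshot unit. This is only a sketch: the unit name is made up, and the GPU index and clock values are the ones from the example above, so adjust them for your setup:

```ini
# /etc/systemd/system/nvidia-clock-limit.service (hypothetical name)
[Unit]
Description=Enable persistence mode and cap NVIDIA application clocks
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/usr/bin/nvidia-smi -pm 1
ExecStart=/usr/bin/nvidia-smi -i 0 -ac 810,405

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now nvidia-clock-limit.service`.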

Unfortunately, the clocks don’t seem to be limited exactly to the values I gave. With the previous method, the core clock would only max out at 405 MHz. Now, it goes up to 670 MHz. But it’s still better than it going all the way up into the GHz range.

Running games in Wine or Proton will ignore this. I don’t know why. Even if I run a lightweight or old game where low clocks are perfectly fine, the GPU will jump to max clocks anyway. So this is not a real workaround. It’s a “better than nothing” solution.


Hi All,
I need your help to understand the exact requirement so that I can pass it on to higher management for a smarter solution.

We want to be able to set the performance level manually.
For example, i want to lock performance level at 0.
Some other user might want to lock performance level at 3.

The exact requirement for me is to have the graphics card run with its fans off. This reduces noise while I work.

The card itself automatically keeps the fans disabled when the GPU temperature is low enough and thus it’s safe for it to run with passive cooling. In my case, the card’s BIOS uses 60 degrees Celsius as the safe limit.

To keep the GPU under that temperature limit, I previously used the NVreg_RegistryDwords kernel module option to limit the maximum power level the GPU can operate in. If I limited it to level 1 for example, the card would idle at level 0, and when there’s load put on the GPU, it would jump up to level 1, but not higher. This kept the GPU temperature under 50 degrees Celsius at all times, while still providing enough performance for tasks that aren’t gaming related. If I want to actually play a demanding game, I dual boot to Windows. While I work, I run Linux and I need my workstation to be silent and also not waste power.

In recent drivers, the ability to limit the power level through NVreg_RegistryDwords was completely removed, without offering any working alternative. The GPU will jump to its maximum power level and it’s impossible to keep it passively cooled. Limiting power consumption through the nvidia-smi utility that comes with the drivers does NOT work. It only allows a minimum power limit of 150 W, which isn’t remotely low enough to keep the card cool. The previous method of limiting the GPU to power level 1 resulted in a power consumption between 25 W and 40 W, which kept the card cool and silent.

So an alternative is needed to configure the maximum power level the GPU is allowed to use. Perhaps through nvidia-settings.
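For what it’s worth, the only related knob nvidia-settings exposes today is, to my knowledge, GpuPowerMizerMode, which selects a clocking policy (adaptive vs. prefer maximum performance) rather than capping the performance level, so it doesn’t replace the removed registry keys. For completeness, the adaptive policy can be applied from a startup file (a fragment, assuming GPU 0 and a running X session):

```shell
# ~/.xinitrc fragment: prefer adaptive clocking on GPU 0.
# This lowers idle clocks but does NOT cap the level under load.
nvidia-settings -a "[gpu:0]/GpuPowerMizerMode=0"
```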

The “smarter” solution already exists. You just need to open-source it and distros need to package the driver correctly.

@BlueGoliath Apologies for hijacking the thread. Your other project, Envious-FX — is it something that can be ported to Linux?

I don’t have any plans to release a Linux version.

May I inquire why?