Linux driver 390.59: Coolbits/RegistryDwords have no effect.

Every single time, with every version upgrade (Fedora 28 currently), the Coolbits/RegistryDwords options in xorg.conf stop working. Here is what's in my xorg.conf (it used to work perfectly, after I spent months trying to find the correct settings).

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    Option "NoLogo" "true"
    Option "DPI" "96 x 96"
    # Specify Nvidia PCI device
    BusID "PCI:1:0:0"
    # Make sure X starts also when no outputs are connected to the Nvidia chip
    Option "AllowEmptyInitialConfiguration"
    Option "eDP-1-0" "eDP-1-0"
    Option "Coolbits" "28"
    Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerLevel=0x3; PowerMizerDefault=0x3; PowerMizerDefaultAC=0x3"
    Screen 0
EndSection
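For reference, here is roughly how I check whether these options actually take effect: with the RegistryDwords above in place the performance level should stay pinned, but it doesn't anymore. The attribute names below are the ones the nvidia-settings GUI shows on the PowerMizer page; I'm assuming they are still queryable from the CLI:

# read-only attributes: current performance level and current clocks
nvidia-settings -q "[gpu:0]/GPUCurrentPerfLevel"
nvidia-settings -q "[gpu:0]/GPUCurrentClockFreqs"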

I need to lock the card into power saving because of overheating; it's a GeForce GTX 870M. Is there any way of doing that?
And I wouldn't even need to play with xorg.conf if someone at Nvidia had a lightbulb moment: "Maybe we should add a 'power saving' option next to 'adaptive' and 'performance' under PowerMizer in the GUI settings app?"

Try adding an nvidia script in /etc/X11/xinit.d/ with:

#!/bin/sh

# Set the PowerMizer mode only if the nvidia kernel module is loaded
# (grep -q instead of the bash-only &> redirect, since this runs under /bin/sh)
if lsmod | grep -q nvidia ; then
    nvidia-settings -a "GPUPowerMizerMode=3"
fi

Wouldn’t it make more sense to work at the cause, not the symptom, i.e. cleaning the dust from heat spreaders?

@dinosaur_ tried that from the command line and I get:
$ nvidia-settings -a "GPUPowerMizerMode=3"

Valid values for 'GPUPowerMizerMode' are: 0, 1 and 2.
'GPUPowerMizerMode' can use the following target types: GPU.

If they removed support for that in the driver, that's the last time I buy anything with Nvidia inside.

@generix yes, it would, and I did… and I'm not buying this brand of badly designed laptop with a failing proprietary fan assembly I can't even find for sale.
NONE OF WHICH IS THE POINT! I should be able to manage power levels on a card no matter how dusty my heat spreaders are, so: do you have anything of value to add to this discussion?

They NEVER removed or changed the GPUPowerMizerMode attribute. You (and apparently the person who provided the code) don’t know what you are doing.

3 is not, and never has been, a valid value. Changing GPUPowerMizerMode isn't going to help you to begin with: by default the driver will only move to a higher performance level if there is enough GPU utilization to warrant it. There is no way (AFAIK; at least not on a newer Pascal card, maybe on older cards you can) to force a specific GPU voltage/clock.
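You can watch that behavior for yourself; a minimal sketch with nvidia-smi, assuming your driver/GPU actually reports these query fields (some come back as N/A on older mobile parts):

# log utilization, SM clock and temperature once per second;
# the clock should only ramp up while utilization is high
nvidia-smi --query-gpu=utilization.gpu,clocks.sm,temperature.gpu --format=csv -l 1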

So what can you do?

A. Decrease the GPU's power limit to its smallest allowed value via the nvidia-smi CLI (see the sketch after this list). This will cause your GPU to become power starved and run at a lower clock speed even if utilization is high.

B. Underclock the GPU.

Both options may cause poor performance and/or strange application behavior, especially in applications that use the GPU heavily.
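For option A, something along these lines; this is only a sketch and assumes your GPU/driver exposes power-limit controls through nvidia-smi at all (many mobile GeForce parts don't, in which case nvidia-smi will report the operation as unsupported):

# show the supported power limit range for the GPU
nvidia-smi -q -d POWER
# enable persistence mode so the setting sticks, then set the limit (in watts)
sudo nvidia-smi -pm 1
sudo nvidia-smi -pl <minimum-watts-reported-above>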

"B. Underclock the GPU."

Fine, so how do I do that, underclock the GPU?

PS: Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerLevel=0x3; PowerMizerDefault=0x3; PowerMizerDefaultAC=0x3" used to work, at least at some point.

via nvidia-settings:

nvidia-settings -a "GPUGraphicsClockOffset[3]=<some-negative-clock>"

You can do memory too, but graphics is probably going to do a whole lot more. You can find out which values are acceptable by using the nvidia-settings GUI, or by throwing in some unrealistic value (like 999999999) and reading the valid range from the error message.

Whether you can even underclock a mobile GPU, I honestly don't know.
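If you'd rather not guess at values, querying the attribute should also print its valid range; a minimal sketch (the [3] is the performance level, same as above, and the offsets are only exposed when Coolbits has the overclocking bit set, which your existing "28" should already include):

# query the offset for performance level 3; the output should include the valid range
nvidia-settings -q "GPUGraphicsClockOffset[3]"
# then apply a negative offset within that range (-200 is just an example value)
nvidia-settings -a "GPUGraphicsClockOffset[3]=-200"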

I don't know anything about those RegistryDwords, only that the nvidia-settings CLI hasn't changed any.

I think they moved those to the kernel module: https://devtalk.nvidia.com/default/topic/1039521/dramatic-overall-performance-and-heat-generation-with-geforce-gtx-1070-with-max-q-design/?offset=14
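If that's the case, the same DWORDs would go into a modprobe configuration instead of xorg.conf; a minimal sketch, assuming the nvidia kernel module in 390.59 accepts NVreg_RegistryDwords and that these particular keys are still honored there (I haven't verified that on this driver):

# /etc/modprobe.d/nvidia-powermizer.conf (the file name is just an example)
options nvidia NVreg_RegistryDwords="PowerMizerEnable=0x1;PerfLevelSrc=0x2222;PowerMizerLevel=0x3;PowerMizerDefault=0x3;PowerMizerDefaultAC=0x3"

If the nvidia module is baked into your initramfs, regenerate it (e.g. dracut -f on Fedora) and reboot for the change to take effect.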