Hello, I have a GTX 1070 graphics card that I am trying to set up for cryptocurrency mining. I have run into an issue where the nvidia-settings utility is not changing the graphics card's attributes, which is preventing me from overclocking the card to maximize performance.
I am able to query attributes like GPUPowerMizerMode, but upon assigning a value and querying again, the value is unchanged.
$ nvidia-settings -q [gpu:0]/GPUPowerMizerMode
Attribute 'GPUPowerMizerMode' (crypt0:0[gpu:0]): 2.
Valid values for 'GPUPowerMizerMode' are: 0, 1 and 2.
'GPUPowerMizerMode' can use the following target types: GPU.
$ nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=1"
Attribute 'GPUPowerMizerMode' (crypt0:0[gpu:0]) assigned value 1.
$ nvidia-settings -q [gpu:0]/GPUPowerMizerMode
Attribute 'GPUPowerMizerMode' (crypt0:0[gpu:0]): 2.
Valid values for 'GPUPowerMizerMode' are: 0, 1 and 2.
'GPUPowerMizerMode' can use the following target types: GPU.
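For scripting, I pull just the numeric value out of that query output with a small sed helper (a sketch assuming the exact "Attribute '...' (...): N." format shown above; nvidia-settings also has a terse -t flag that prints only the value):

```shell
# Extract the trailing numeric value from an "Attribute '...' (...): N." line.
# Assumes the exact output format shown above.
parse_attr_value() {
    sed -n "s/^Attribute '.*'.*): \([0-9]*\)\.$/\1/p"
}

echo "Attribute 'GPUPowerMizerMode' (crypt0:0[gpu:0]): 2." | parse_attr_value
# prints: 2
```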
When I try to assign a GPU clock offset, however, it works:
$ nvidia-settings -a [gpu:0]/GPUGraphicsClockOffsetAllPerformanceLevels=200
Attribute 'GPUGraphicsClockOffsetAllPerformanceLevels' (crypt0:0[gpu:0]) assigned value 200.
$ nvidia-settings -q [gpu:0]/GPUGraphicsClockOffsetAllPerformanceLevels
Attribute 'GPUGraphicsClockOffsetAllPerformanceLevels' (crypt0:0[gpu:0]): 200.
The valid values for 'GPUGraphicsClockOffsetAllPerformanceLevels' are in the range -200 - 1200 (inclusive).
'GPUGraphicsClockOffsetAllPerformanceLevels' can use the following target types: X Screen, GPU.
And the clock speeds reported by nvidia-smi -q -d CLOCK update appropriately.
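To keep these assignments repeatable across reboots I batch them in a small loop (a sketch: the GPU list, the memory-offset value, and the attribute name GPUMemoryTransferRateOffsetAllPerformanceLevels are my assumptions; this only prints the commands, so nothing touches the hardware until you pipe the output to sh):

```shell
# Build the nvidia-settings assignment commands for each GPU index.
# Offsets here are illustrative; printing first keeps this a dry run.
GPUS="0"            # space-separated GPU indices
CLOCK_OFFSET=200
MEM_OFFSET=800      # hypothetical memory transfer rate offset

build_cmds() {
    for i in $GPUS; do
        echo "nvidia-settings -a [gpu:$i]/GPUGraphicsClockOffsetAllPerformanceLevels=$CLOCK_OFFSET"
        echo "nvidia-settings -a [gpu:$i]/GPUMemoryTransferRateOffsetAllPerformanceLevels=$MEM_OFFSET"
    done
}

build_cmds          # dry run: inspect the commands; `build_cmds | sh` executes them
```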
Why isn't PowerMizerMode updating? I believe I need it set to 1 (prefer maximum performance) as described in this article, though I admit I am still a little confused about what it does. From what I have read, it seems it would allow the GPU to draw more power. Right now the card draws around 140 W against a 170 W power limit. Overclocking the memory and core clocks doesn't push the draw much past 140 W, yet the hashrate plateaus, leaving 30 W of headroom unused. I believe changing PowerMizerMode will remove this throttle. Is this correct?
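For reference, these are the nvidia-smi commands I use to inspect and raise the power limit (real flags, left commented out here since they need root and the GPU; the arithmetic at the end is just the headroom I described):

```shell
# Inspect power draw and limits, and raise the software power limit:
#   nvidia-smi -q -d POWER        # current draw, default/max power limits
#   sudo nvidia-smi -pm 1         # persistence mode so the limit sticks
#   sudo nvidia-smi -pl 170       # set the power limit to 170 W
# The unused headroom I mention above:
draw_w=140
limit_w=170
echo "headroom: $((limit_w - draw_w)) W"
# prints: headroom: 30 W
```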
I am running Ubuntu Server 20.04 headless. I do not have a desktop environment and do not want to install one, so any solution needs to work from the command line only. From what I understand based on this post, nvidia-settings requires a running X server. I generated a config using nvidia-xconfig --enable-all-gpus --cool-bits=28 --allow-empty-initial-configuration, then added the RegistryDwords line to xorg.conf as shown below.
Section "Screen"
Identifier "Screen0"
Device "Device0"
Monitor "Monitor0"
DefaultDepth 24
Option "AllowEmptyInitialConfiguration" "True"
Option "Coolbits" "28"
Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerDefaultAC=0x1"
SubSection "Display"
Depth 24
EndSubSection
EndSection
I would imagine that once X starts and reads in the config, GPUPowerMizerMode would be set to 1 as configured above, but it isn't. To start the server so nvidia-settings doesn't error out, I run the following after connecting via SSH:
$ export DISPLAY=:0
$ sudo X :0 &
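To avoid racing the X startup, I sometimes poll for the display socket before querying (a sketch: wait_for_file is my own helper, and /tmp/.X11-unix/X0 is the standard socket path for display :0):

```shell
# Wait until a path exists, up to max_seconds; returns non-zero on timeout.
wait_for_file() {
    path=$1; max_seconds=$2; n=0
    while [ ! -e "$path" ] && [ "$n" -lt "$max_seconds" ]; do
        sleep 1
        n=$((n + 1))
    done
    [ -e "$path" ]
}

# Usage after `sudo X :0 &`:
# wait_for_file /tmp/.X11-unix/X0 10 && nvidia-settings -q "[gpu:0]/GPUPowerMizerMode"
```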
I can then query and attempt to change GPU settings, which is where I hit the PowerMizer problem. How can I set PowerMizerMode to maximum performance mode?
Version/hardware info:
GeForce GTX 1070 (EVGA SuperClocked)
Ubuntu Server 20.04.2 LTS
nvidia-settings 440.82
NVIDIA driver 460.32.03
CUDA 11.2