Hi everyone. I have a Lenovo Legion 5 with a GeForce RTX 4060. According to the specifications, the video card should be able to draw up to 140 W. However, in my Linux installation (Debian 13, proprietary drivers updated to version 535), the power limit is set to 60 W, which is less than half of what the card draws on Windows.
This happens even when using the original 230 W charger provided.
In previous versions of the driver, it was possible to adjust this value manually by varying the power limit, but starting from version 530, this is no longer possible.
This is a very serious bug as it is halving the performance of my PC. How can this be resolved?
To give an example, this is a portion of the output of nvidia-smi -q
GPU Power Readings
Power Draw : 1.71 W
Current Power Limit : 60.00 W
Requested Power Limit : 60.00 W
Default Power Limit : 60.00 W
Min Power Limit : 5.00 W
Max Power Limit : 140.00 W
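To be clear, the kind of manual adjustment I mean above is something like nvidia-smi's power-limit option (a sketch; I'm assuming this interface, and 140 is simply the spec maximum):
nvidia-smi -q -d POWER   # query the current and allowed limits
sudo nvidia-smi -pl 140  # raise the limit to the 140 W spec maximum (needs root); this is what stopped working from 530 onwards here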
Any logs from nvidia-powerd.service?
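For example, something along these lines (assuming a systemd-based install) should show recent output from the daemon:
journalctl -u nvidia-powerd.service --no-pager   # recent log output from nvidia-powerd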
Actually it doesn’t work. I installed nvidia-powerd and checked that its version is the same as the NVIDIA driver’s, which it is, but I cannot enable the service, and I can’t understand why. By the way, it should only make small wattage adjustments: from other forums I see it boosts by 15 W or less when required. But here we are talking about 60 W vs 140 W. Indeed, on Windows I got 125 W out of 140 W, reaching 140 W only a few times thanks (I guess) to nvidia-powerd.
If nvidia-powerd is not enabled, then (for now) that is the root cause. Your GPU has 60 W of power reserved for it, and the rest of the power budget is shared between it and the CPU. The nvidia-powerd service is what determines how much power allocation your GPU will get (potentially up to 140 W).
If it is not enabled (or is not functioning properly), that is what will cause your GPU to remain at the default power limit.
Can you go into more detail on why you can’t enable the nvidia-powerd service?
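A minimal check would be something like this (assuming a systemd-based install; adjust the unit name if your distro packages it differently):
sudo systemctl enable --now nvidia-powerd.service   # enable and start the daemon
systemctl status nvidia-powerd.service              # shows the error if it refuses to start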
Please check if /sys/firmware/acpi/platform_profile_choices exists, then install power-profiles-daemon and switch to performance mode.
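For example (a sketch assuming a Debian-style setup and power-profiles-daemon's standard CLI):
cat /sys/firmware/acpi/platform_profile_choices   # lists the platform profiles the firmware exposes
sudo apt install power-profiles-daemon
powerprofilesctl set performance                  # switch to the performance profile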
Having a similar issue. I had to copy the service file from /usr/share/doc/nvidia-kernel-common-555 to /lib/systemd/system, as it doesn’t seem nvidia-powerd was copied over when the driver package was installed. Attempting to enable/start nvidia-powerd yields a “Found unsupported configuration. Exiting…” error.
Worth noting that I have a 3060 and not the 4060 this thread is about, but the issue is similar. I’m essentially stuck in low-power mode.
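For anyone trying the same workaround, the steps were roughly the following (the exact unit filename under the doc directory is my assumption here):
sudo cp /usr/share/doc/nvidia-kernel-common-555/nvidia-powerd.service /lib/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now nvidia-powerd.service   # this is where the “Found unsupported configuration” error appears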
I already have power-profiles-daemon installed, but I don’t have /sys/firmware/acpi/platform_profile_choices present.
Does your system meet the requirements posted on this page: Chapter 23. Dynamic Boost on Linux?
I don’t have a notebook; that’s the only requirement I don’t meet. Does this mean that under the 555 drivers I’m essentially stuck in low-power mode?
nvidia-powerd would not be applicable for you in that case.
Can you quantify what “low-power mode” means?
Running nvidia-smi -q -d POWER would be helpful.
After updating to 555, no application is able to push the GPU above 25-30 W of consumption. Its power limit is set to 170 W.
A game like World of Warcraft shows the GPU in [Low Power] mode. Nothing I do pushes the GPU beyond this low-power band; it’s behaving as if it’s in low-power mode despite my nvidia-smi output saying the contrary.
Timestamp : Sat Jun 1 13:54:13 2024
Driver Version : 555.42.02
CUDA Version : 12.5
Attached GPUs : 1
GPU 00000000:26:00.0
GPU Power Readings
Power Draw : 18.92 W
Current Power Limit : 170.00 W
Requested Power Limit : 170.00 W
Default Power Limit : 170.00 W
Min Power Limit : 100.00 W
Max Power Limit : 187.00 W
Power Samples
Duration : 3.68 sec
Number of Samples : 119
Max : 22.54 W
Min : 18.80 W
Avg : 20.07 W
GPU Memory Power Readings
I tested this further by loading World of Warcraft and Minecraft while running watch nvidia-smi -q -d POWER. It peaked at 22 W, as shown by the report output above.
Essentially, any game that uses the GPU under Wayland and version 555 experiences severe frame hitching and low performance. I still have poor performance under X11 and 555, but at least I can get a stable framerate.
Here’s a view of nvtop under load:
This should probably be reported as a regression then, since you note that this started happening when you updated to r555 (assuming nothing else has changed).
Is the best way to report a regression to create a new thread, or to respond in the “Feedback & Discussion” thread?
I would send an email to linux-bugs@nvidia.com