[BUG Report] Idle Power Draw is ASTRONOMICAL with RTX 3090

This release still works for me with 2 4K monitors at 60 Hz. The card drops to P8 at idle.

I have noticed an issue with S3 suspend since the 525 series: every other time, my machine fails to freeze when entering S3 suspend and stays stuck with the fans running. Waking it up and then suspending again suspends properly.

Can you confirm whether you tested it with driver 525.89.02?
If yes, please share a fresh bug report captured while the issue is reproducing.

@amrits, just tested the latest driver, 530.41.03. Unfortunately the high idle power draw comes back. I have a 3070 Ti and two identical 165 Hz monitors. When both monitors run at 165 Hz, the performance level stays stuck at P0. If I turn the refresh rate of both monitors down to 144 Hz, the card idles at P5. With one monitor at 144 Hz and the other at 165 Hz, the performance level actually comes down to P8.

The fix for the high power draw is not included in the released 530 driver, hence you are still seeing the issue.
It will be incorporated in a future driver release. Meanwhile, please stay on the 525.89.02 driver.
Apologies for the inconvenience caused.

Thanks for reporting this issue with a 3-display setup.
I can duplicate the same behavior locally on both Linux and Windows platforms.
I have filed a new bug, 4043860, internally for tracking purposes.
The team will look into possible improvements and will keep you posted.
Thanks again for all the feedback.


Probably no need to reply as amrits already reproduced the issue, but I can also see high power consumption with 2 1440p 165Hz displays.

Screenshot_20230329_220348

EDIT: This does not seem to occur when using 525.89.02, but it is present in 530.41.03.

Screenshot_20230330_231433

As the original discoverer/reporter of this issue, I figured I should chime in since it’s been a while since I’ve commented:

I’m on driver 530.43.01 and the issue actually IS still fixed, albeit in my opinion only partially. What I mean is that the GPU should not force itself to the highest power state at all times just because “Prefer Maximum Performance” is the active PowerMizer mode, yet on my system with 530.43.01 (and 525), power draw is pegged at 110 W or more 100% of the time if “Prefer Maximum Performance” is the active mode.

To refresh anyone’s memory who needs it, I’m running an RTX 3090 with 2x 2560x1440 165 Hz monitors of the exact same model (so there’s no “one is running at 165.00 Hz while the other is at 164.80 Hz” mismatch or anything like that).

BUT, when I go into the X Server Settings and change the PowerMizer Mode to Adaptive or Auto, the bug is gone and I’m back down to power usage in the 40W range (assuming I’m not using anything with GPU acceleration like a browser, but even then it “idles” in the 70W range):


I’m not sure why no one else seems to be able to achieve this result on the 530 drivers, but I have reproduced it several times, across multiple kernels, and it is now 100% correlated with the PowerMizer mode (back when I reported this and until very recently, power usage was over 100W at all times regardless of PM mode).
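In case it helps anyone compare results across driver versions and PowerMizer modes, here is a minimal sketch for checking whether the card has actually settled into a low-power state. It assumes nvidia-smi’s --query-gpu CSV interface; the function name, the 60 W ceiling, and the P5/P8 state list are my own arbitrary choices, not NVIDIA-defined limits.

```shell
#!/bin/sh
# Sketch: decide whether the card "looks idle" from one row of
#   nvidia-smi --query-gpu=pstate,power.draw --format=csv,noheader
# e.g. "P8, 34.12 W". The 60 W threshold and the P5/P8 state list
# are assumptions of mine, not anything the driver defines.
looks_idle() {
    pstate=$(echo "$1" | cut -d, -f1 | tr -d ' ')
    watts=$(echo "$1" | cut -d, -f2 | tr -d ' W')
    case "$pstate" in
        P5|P8) awk -v w="$watts" 'BEGIN { exit !(w <= 60) }' ;;
        *)     return 1 ;;
    esac
}

# Live usage (requires the NVIDIA driver):
#   looks_idle "$(nvidia-smi --query-gpu=pstate,power.draw --format=csv,noheader)"
looks_idle "P8, 34.12 W"  && echo "idle"      # prints "idle"
looks_idle "P0, 117.00 W" || echo "not idle"  # prints "not idle"
```

Run it a few seconds after the desktop goes quiet; a card pegged at P0 by this bug will fail the check no matter how long you wait.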

On a side note, @kodatarule, I’ve known you from the various forums and spoken to you periodically for years now, but for the life of me I can’t recall you ever telling me your exact HW setup. The reason I’m asking is that your screenshots are alarming when it comes to temperatures. We both have 3090s, and my GPU runs at around 38-44C with the fans at 44%; I’ve had this same fan curve for years (I got my card in person at Micro Center on launch day, if you remember me mentioning that). And it’s not like I have one of those insanely huge-cooler cards or an AIO — I have an EVGA XC3 Ultra, honestly only slightly bigger than my Gigabyte Gaming OC RX 5700 XT. So I don’t have some amazing cooling solution, just a quite good case with good airflow (Phanteks Eclipse P500A).

And that temperature range I gave you was the same even when idle power was always over 100W. And I don’t live in some frigid region, I live in Appalachia and my PC is in a bedroom that is usually slightly above room temperature (because of the PC).

So like, do you have a small form factor build, or a glass-fronted case or something? Because if we have the same GPU (a 3090) and our fans are running at the same speed (45%-ish), why are your temperatures 10-15C higher than mine? I’m just concerned there might be a problem with your GPU’s heatsink contact/thermal solution or something.

What are your max temps in games? I only hit 70C when gaming for extended periods during the warm seasons, usually I max out around 64C even in games like Cyberpunk.

Anyway if you wanna talk more about that issue and compare notes, you can DM me on reddit or discord or something so we don’t clutter this thread (you know my reddit username is also gardotd426 I’m sure)


I was using Ubuntu Desktop 22.04 with 525 driver without issues.

Lately, I did a fresh install of Ubuntu Server 22.04 and enabled the 525 driver during the installation.

Right after the installation is complete, my 2 NVIDIA 3090 cards sit at ~100W / 60C at idle. It is a headless server with zero software installed, just a fresh Ubuntu Server 22.04

Then I upgraded to the 530 driver with apt install nvidia-driver-530, and now the issue is fixed. I didn’t have to change any settings, just upgrade to the 530 driver and reboot :)

I had a similar issue, and I managed to fix it by disabling hardware acceleration. xorg - How to disable Hardware Acceleration in Linux? - Unix & Linux Stack Exchange

Now my GPU idles at 35-40 W — however, without hardware acceleration the frame rate is lower.

NOTE: I am on Ubuntu 22.04 and have tried drivers 515, 525, and 530, but none of them fixes the issue.

I have a similar issue with high power draw on a 2080 Ti:

My 2080 Ti with triple monitors consumes about 55 W at idle, with the GPU clock around 1200 MHz, memory at 1750 MHz, and the video clock at 1080 MHz.

If I connect only 1 of the monitors, power draw drops to 16 W, with the GPU clock around 300 MHz, memory at 100 MHz, and the video clock at 540 MHz. My main monitor is a 3440x1440 120 Hz, but even if I drop it to 50 Hz, the clocks stay the same and power sits at 13 W.

Adding a second 1920x1200 60 Hz monitor increases the power to 20 W, but the clocks don’t change. When I connect the 3rd 1920x1200 60 Hz monitor, power draw goes to 55 W and the clocks jump back up: GPU around 1200 MHz, memory at 1750 MHz, and video at 1080 MHz.

Anything new on this?

I also have the bug with 1x 1440p monitor at 165 Hz and 2x 1080p monitors, also at 165 Hz, on my 3080.
Dropping the 1080p monitors to 144 Hz fixes the issue and the card idles again. No other setting helps.
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 536.67                 Driver Version: 536.67       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 TCC/WDDM      | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf         Pwr:Usage/Cap  |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3080      WDDM | 00000000:2B:00.0  On |                  N/A |
| 30%   39C    P5             49W / 370W |  1221MiB / 10240MiB  |     28%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      2240    C+G   …\cef\cef.win7x64\steamwebhelper.exe          N/A |
|    0   N/A  N/A      5776    C+G   C:\Windows\explorer.exe                       N/A |
|    0   N/A  N/A      7136    C+G   …CBS_cw5n1h2txyewy\TextInputHost.exe          N/A |
|    0   N/A  N/A      7992    C+G   …Search_cw5n1h2txyewy\SearchApp.exe           N/A |
|    0   N/A  N/A     10260    C+G   …5n1h2txyewy\ShellExperienceHost.exe          N/A |
|    0   N/A  N/A     11240    C+G   …on\wallpaper_engine\wallpaper32.exe          N/A |
|    0   N/A  N/A     11824    C+G   …_8wekyb3d8bbwe\Microsoft.Photos.exe          N/A |
|    0   N/A  N/A     12368    C+G   …oogle\Chrome\Application\chrome.exe          N/A |
+---------------------------------------------------------------------------------------+

We are still there :D

The weird thing for me is that your secondary GPU consumes the same amount of power as the one used as the main GPU — in my case, and in yours too, it actually consumes more!
I mean, it’s a server, but on a desktop the second GPU usually sits around 3-5 watts at idle. There is absolutely no load on that GPU and it’s parked; nothing really explains this behaviour. If you ran a home server, all of this wasted power could add up to 50 watts+ over 10 GPUs — not even counting that the main GPU typically runs at a lower wattage on a desktop system.

That’s not even in the same order of magnitude as the original problem I reported here.

I had two monitors and my power draw NEVER dropped below 100W. Now I idle in the 35-40 range.

I can’t believe that this issue has been going on for so many years and still hasn’t been resolved (Linux 2080ti).

Use this to reduce power usage at the desktop.

I can’t believe that I was drawing 117W from the wall last 2 years 12-16 hours a day for nothing. Unbelievable!

I have 2x 4k 60Hz and 1x 5120x1440 240Hz connected to RTX3090 and to add more fun, lower refresh rate on Odyssey G9 doesn’t always mean less power draw:
240Hz - 117W
120Hz - 50W
60Hz - 117W

Funnily enough, I have a second GPU (RTX 3060) to help run a screen wall with another 3x 4K 60 Hz screens, and this second GPU is idling at 13 W!

Just wow

Again, I’m the original reporter, and the current situation is nowhere near what I reported back over a year ago.

I have 2 1440p 165 Hz monitors, and at idle (while still running at 165 Hz) the card only draws 30-50 W. Even right now, with a GPU-composited, accelerated browser open, I’m at 49 W.

Make sure all y’all’s PowerMizer settings in the NVIDIA control panel are set to Auto and not Prefer Maximum Performance.
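If you’d rather script that than click through X Server Settings, here’s a sketch using the nvidia-settings CLI. The 0 = Adaptive / 1 = Prefer Maximum Performance / 2 = Auto numbering for the GPUPowerMizerMode attribute is assumed from recent driver behaviour — verify it on your own system with nvidia-settings -q GpuPowerMizerMode before relying on it.

```shell
#!/bin/sh
# Map a human-readable PowerMizer mode name to the attribute value
# nvidia-settings expects. The 0/1/2 mapping is an assumption from
# recent drivers; confirm with: nvidia-settings -q GpuPowerMizerMode
powermizer_value() {
    case "$1" in
        adaptive) echo 0 ;;
        max)      echo 1 ;;
        auto)     echo 2 ;;
        *)        echo "unknown mode: $1" >&2; return 1 ;;
    esac
}

# Apply it to GPU 0 (needs a running X session, so commented out here):
# nvidia-settings -a "[gpu:0]/GpuPowerMizerMode=$(powermizer_value auto)"
```

Note that this attribute is per-GPU, so a multi-card setup needs one -a assignment per [gpu:N] index.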

I’ve provided a workaround. Stop resurrecting this old thread because you can’t be bothered to try it.