My previous question about this did not get any response, but it may be a problem with the Nvidia drivers and supplied firmware that limits the power draw of laptop RTX 3060 GPUs to a maximum of 80W.
So an important question to ask: has ANYONE on ANY laptop been able to get a laptop 3060 GPU to draw more than 80W on Linux with the proprietary drivers? Please share here, thanks.
Thanks for the links @generix; it could be an issue with platform power profiles.
But to exclude an issue with the Nvidia drivers/supplied firmware (some kind of software limit), we need confirmation from someone that they were able to run an RTX 3060 mobile GPU at over 80W at all. Any leads here would help immensely.
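For anyone willing to test: a quick way to watch the actual draw while the GPU is under load is something like the following (just a sketch, the exact query fields supported can vary between driver versions):
> nvidia-smi --query-gpu=power.draw,power.limit,power.max_limit --format=csv -l 1
If that ever shows a sustained draw above 80W on a mobile 3060, please post your laptop model and driver version.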
You’ll notice a lot of Notify (NPCF, 0xC0) calls in the DSDT. I believe these all go unhandled because there’s no driver listening for them (i.e. no equivalent of nvpcf.sys on Windows), so the power limits don’t change the way they should.
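If anyone wants to check their own machine for these, this is roughly how to pull them out (assuming iasl from acpica-tools is installed; the package name differs between distros):
> sudo cat /sys/firmware/acpi/tables/DSDT > dsdt.aml
> iasl -d dsdt.aml   # disassembles the table into dsdt.dsl
> grep -n "Notify (NPCF" dsdt.dsl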
Also note
> cat /proc/driver/nvidia/gpus/0000:01:00.0/power
Runtime D3 status: Enabled (fine-grained)
Video Memory: Active
GPU Hardware Support:
 Video Memory Self Refresh: Supported
 Video Memory Off: Supported
Power Limits:
 Default: 80000 milliwatts
 GPU Boost: 4294967295 milliwatts
I’m not sure how the “Power Limits” values are populated, but it could be something hardcoded into the drivers.
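For comparison, it may be worth checking what NVML reports for the same limits; if the max there equals the default, my guess is the cap comes from the VBIOS/InfoROM rather than being hardcoded in the driver, but that’s only an assumption:
> nvidia-smi -q -d POWER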
I own a Legion 5 Pro 16ACH6H with an RTX 3060. From my basic observations, these power levels are preconfigured in Lenovo’s customized VBIOS; the default target value is set to exactly 80W, as the picture below shows. And I can easily predict that this problem with RTX 3000 GPUs running at low TGP affects the whole Lenovo Legion series…
OK, if everything is working as expected under Windows (assuming the NVDA0820 driver responsible for power switching is bundled only with the Nvidia drivers for Win10 and is missing from the Linux version), is there any method (reverse engineering) to find out what exactly the NVDA0820 driver is doing?
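One possible starting point (purely my guess): NVDA0820 looks like an ACPI _HID, so you could search the decompiled DSDT (the dsdt.dsl from earlier in the thread) for it and see which Device scope it sits in and which methods are defined there; those methods are presumably what the Windows driver ends up calling:
> grep -n "NVDA0820" dsdt.dsl   # then inspect the surrounding Device () scope for Method () definitions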
Someone should try using GWE to slide the power limit slider up and raise the limit, because this doesn’t sound like a driver or Linux issue at all; it sounds like intended behavior.
Every single 3060 Mobile GPU I can find has a stock power limit of 80W. Most of them have a MAX power limit of 80W, and so they can’t be set to go higher on Linux or Windows. But I have found a few where the Power Limit Default is 80W and Max is 90, 95, or in one or two cases, 110 or 115.
But of course you don’t get over that in Linux, because you’re not raising the power limit. You have to raise the power limit (on any OS) in order to go over that default 80W.
This is the same for desktop GPUs. Go look at the TechPowerUp page for the EVGA XC3 Ultra RTX 3090. Its default Power Limit is 350W and its Max PL is 366W. It won’t go over 350W on Windows OR Linux, but on Windows AND Linux I can raise the PL to exactly 366W using Afterburner/X1 on Windows and GreenWithEnvy on Linux.
The only difference here is that these laptops will have custom vendor hardware control utilities with different “power profiles,” and those power profiles just raise the default limit.
If you go into GWE, I’m almost certain you will be able to raise your default power limit if that mobile GPU has a Max PL above 80W. You will be able to raise it to exactly what TechPowerUp lists as the Board Power Limit.
But for those of you with one of the majority of models that have an 80W Max PL, there’s nothing you can do: it’s functioning as intended, and it doesn’t go over 80W on Windows either.
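For anyone who wants to check their own unit without installing GWE, nvidia-smi exposes the same limits, and raising them needs root; on some mobile GPUs it will simply report that changing the power management limit is not supported, which would be consistent with an 80W/80W default/max board:
> nvidia-smi --query-gpu=power.default_limit,power.min_limit,power.max_limit --format=csv
> sudo nvidia-smi -pl 95   # 95 is just an example value; it only sticks if it falls inside the min/max range reported above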
My Legion 5 Pro is having the same symptoms as yours. I tried the GWE way and it doesn’t let you change the power limits. I dumped and decompiled the DSDT tables, but there are too many methods in there, and I’m kinda new to this and lost in it. Please, Generix, help us reach the Nvidia developers and save us from the Microsoft BS.
These problems come with the new Ampere GPUs and their “customizable TGP” feature, which allows some specific range like 80-115W for the base value; but you know, one fixed value at max TDP would be the easy solution…
I can imagine why configurable TGP would be useful for laptops etc., but we just want feature parity with the Windows drivers here. We really need some kind of response from Nvidia devs regarding this.
Regarding reverse engineering, I’m sure it’s doable, but we need more people with the right skill set to 1) be aware of the problem and 2) put in the effort to fix it, and Nvidia is best positioned to resolve the issue for the growing number of Linux users.
Before Lenovo recovery formatted all of my SSDs, I had the DSDT tables decompiled on my system. Has anyone examined the DSDT tables, so we can take blind shots at all the methods in there and maybe find the one that’s supposed to change the GPU TDP?