RTX A2000 Power Limit Randomly Capped at 40%

Hi all, I recently got my hands on an NVIDIA RTX A2000 (6GB model) and have been having a hard time diagnosing a performance issue. Basically, at random intervals, the GPU appears to have its power limit capped at about 40%.

I’ll normally be playing a game without issue (2560x1440 at 100Hz, no problems) and then the performance drop occurs, limiting frames to about 10fps regardless of the game. HWiNFO shows the % TDP reaching no more than 40%, with a power performance limit being flagged for whatever reason. The issue persists for a while and eventually resets on its own. Temps themselves appear to be okay (memory junction is in the 80s °C; maybe that is an issue?). I’ve tested with the most recent drivers (528.24) in Windows 11 and also tested the card in Manjaro. The issue persists in both operating systems.
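
In case it helps, the same readings can be captured from the command line with standard nvidia-smi queries (a rough sketch; field names and output vary a little by driver version):

    # Log power draw, power limit, SM clock and GPU temp once per second
    nvidia-smi --query-gpu=timestamp,power.draw,power.limit,clocks.sm,temperature.gpu --format=csv -l 1

    # Dump the active limiters while the slowdown is happening;
    # look for "SW Power Cap" or "HW Slowdown" under Clocks Throttle Reasons
    nvidia-smi -q -d PERFORMANCE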

Could this possibly be driver related, or hardware related? If driver related, is this a feature that can be modified? I’ve set the card to ‘Prefer Maximum Performance’ and that doesn’t appear to have any effect.

Please let me know if there is any other information I can provide to help troubleshoot this issue.

Hi @turboman300,

A disclaimer to start with: The RTX A2000 is a workstation card for compute workloads. This is not a gaming GPU.

The card has a TDP of 70W, which means it runs in a different power and thermal envelope than a gaming GeForce RTX card. Also, the cooling solution is designed for small servers or large multi-GPU desktop setups with active chassis cooling.
I can’t check myself, but if you run nvidia-smi you can check the temperature and power limits on the card. You can tweak those or try tuning the voltage settings, but beyond that it is hard to analyse.
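
For example (a rough sketch; exact output and the allowed power-limit range depend on the board and driver):

    # Show the temperature thresholds and the configured power limits
    nvidia-smi -q -d TEMPERATURE,POWER

    # If the limit has dropped, try resetting it to the 70W board default
    # (requires admin/root; only accepted within the board's supported range)
    nvidia-smi -pl 70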

There is no artificial limitation in the driver.

That said, it might very well be a HW defect. So if you bought this card new, you might want to ask for an RMA.

Hope that helps.

Thanks Markus, I appreciate the reply.

While I mainly intend to use the card for CAD workloads in a small form-factor desktop, I had also hoped to use it for gaming on occasion. I’ve heard of others who’ve used the card for gaming without issue, so I was hoping to accomplish the same thing.

When I was testing the card in an open-air test bench I had on hand, the temps seemed well within their limits upon checking nvidia-smi. I was using GPU-Z and HWiNFO to determine what exact performance limiters were being set on the card when the performance issues occurred, and they stated power limits were being applied even though the card itself was only pulling about 25 watts (~40% TDP). Since I wasn’t able to narrow down the issue to anything software related, I assumed the card itself was faulty (it was pre-owned, so it seemed rather likely this was the case). I’m in the process of returning the card and will look into obtaining another. Hopefully, I won’t run into the same issue, but if I do, I’ll attempt further troubleshooting and document what I can here.
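
If a replacement card shows the same behavior, I’ll also try watching the throttle and violation counters while it happens, something like this (a sketch; which columns are populated varies by GPU):

    # Per-second power, utilization, clocks and power/thermal violation counters
    nvidia-smi dmon -s pucv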


Thank you! Any documentation is very welcome, and I am watching this thread in case we can help with additional troubleshooting.

Good luck!

Did changing the card fix the issue? I have a similar issue where the card only runs at around 25 W and will not go above that. I am using a riser cable and am wondering if that could be a contributing factor.

I actually was just able to get my hands on another card recently, and I’ve run into the same problem. I too am using a riser cable, but I remember previously testing the original card both with a riser cable and seated directly in the PCIe slot, and the issue occurred either way.

I now believe it is a temperature problem: I 3D printed a shroud to fit a 92mm fan to better cool the card, and that appears to have resolved the down-clocking in most scenarios. I never considered thermals as the cause originally because none of the temperature sensors on the card reached its maximum rated temperature (which I believe nvidia-smi reported as ~85C) during gaming.
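
For anyone else hitting this, the thresholds the driver enforces and a log to correlate against can be pulled with something like the following (a sketch; note the memory junction sensor generally isn’t exposed through nvidia-smi, so HWiNFO is still needed for that reading):

    # Show the slowdown/shutdown temperature thresholds the driver enforces
    nvidia-smi -q -d TEMPERATURE

    # Log temps, clocks and power to a file while gaming to catch the throttle point
    nvidia-smi --query-gpu=timestamp,temperature.gpu,clocks.sm,power.draw --format=csv -l 5 -f thermal_log.csv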