If you have GPU clock boost problems, please try __GL_ExperimentalPerfStrategy=1
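For anyone trying this, the variable has to be in the environment of the GL client (or the whole session) before it starts. A minimal sketch for a single shell session (where to persist it, e.g. /etc/environment or a shell profile, depends on your setup):

```shell
# Export the variable so it applies to GL clients launched from this shell.
export __GL_ExperimentalPerfStrategy=1

# Verify it is visible to child processes.
echo "$__GL_ExperimentalPerfStrategy"
```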

On an EVGA 3080 Ti FTW3 Ultra Hybrid, PowerMizer is stuck at the highest performance level and never drops to the lower levels when more than two monitors are connected; __GL_ExperimentalPerfStrategy=1 makes no difference. With two 1440p monitors running at 60 Hz, the card does drop to the lowest power level. When I either bump one of the two monitors up to 144 Hz or add a third monitor at 60 Hz, it gets stuck at the highest level. When my monitors go into standby, it does clock down to the lowest level, which I can confirm by SSHing into the machine and running nvidia-smi. My previous card, a Zotac 3080 AMP Holo, was able to clock down to the lowest power level with four 1440p monitors running (3 @ 75 Hz, 1 @ 144 Hz). Changing the power limit and/or clock offsets makes no difference.

The nvidia-smi dmon output below shows the card stuck at P0 and drawing 87 W even with 0-1% utilization. When I drop to one or two monitors, PowerMizer starts working and the card idles at 28 W and 32 °C. The nvidia-bug-report file is attached.

$ echo $__GL_ExperimentalPerfStrategy 
1

$ nvidia-smi dmon
# gpu   pwr gtemp mtemp    sm   mem   enc   dec  mclk  pclk
# Idx     W     C     C     %     %     %     %   MHz   MHz
    0    87    45     -     0     1     0     0  9501   210
    0    87    45     -     0     1     0     0  9501   210
    0    87    45     -     0     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     0     1     0     0  9501   210
    0    87    45     -     0     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     0     1     0     0  9501   210
    0    87    45     -     0     1     0     0  9501   210
    0    87    45     -     0     1     0     0  9501   210
    0    87    45     -     0     1     0     0  9501   210
    0    87    45     -     0     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     1     1     0     0  9501   210
    0    87    45     -     0     1     0     0  9501   210
    0    87    45     -     0     1     0     0  9501   210
    0    87    45     -     0     1     0     0  9501   210
    0    87    45     -     0     1     0     0  9501   210
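To show how flat the idle draw is, the samples above can be summarized with a one-liner (dmon.log is a hypothetical file holding the captured output; the awk fields just match the columns dmon prints):

```shell
# Average power draw (column 2) and graphics clock (column 10)
# across all non-header lines of the captured dmon output.
awk '!/^#/ { pwr += $2; pclk += $10; n++ }
  END { printf "avg pwr %.0f W, avg pclk %.0f MHz over %d samples\n", pwr/n, pclk/n, n }' dmon.log
```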

nvidia-bug-report.log.gz (516.8 KB)