See my post on the 680M; I believe that is the case, as Heaven benchmarks report the same speed and FPS, and GPU temperature is lower than in Windows. I am going to rerun a single-monitor test on Heaven 4 today, but I think I also have 'on battery power' issues (the NVIDIA driver/settings show battery as the power source). I know my GPU fan never seems to hit its top speed.
Mine shows 324 MHz as the core speed all the time, memory up to 1800 MHz.
Well, if it is indeed just a reporting error, I'd like to know how NVIDIA have verified it: what process or tool is used to measure the actual clock speed, and why that tool cannot be built into the drivers and/or released to put our minds at rest.
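For what it's worth, one external sanity check (a sketch, not whatever NVIDIA used internally) is to poll `nvidia-smi --query-gpu=clocks.gr,clocks.mem --format=csv,noheader,nounits` during a benchmark run, which asks the driver for current clocks rather than reading nvidia-settings. A minimal parser for one line of that output; the sample reading below is hypothetical, just matching the 324/1800 figures mentioned above:

```python
# Sketch: parse one CSV line produced by
#   nvidia-smi --query-gpu=clocks.gr,clocks.mem --format=csv,noheader,nounits
# polled while a benchmark is running.
def parse_clocks(csv_line: str) -> tuple[int, int]:
    """Return (graphics_mhz, memory_mhz) from one nvidia-smi CSV line."""
    gr, mem = (field.strip() for field in csv_line.split(","))
    return int(gr), int(mem)

# Hypothetical idle reading, not captured from real hardware:
sample = "324, 1800"
print(parse_clocks(sample))  # -> (324, 1800)
```

Polling this in a loop while Heaven runs would at least show whether the driver's own reported clocks ramp up under load, independently of the PowerMizer page.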
As it stands now, we only have their word, and their words are very carefully phrased. From the Phoronix article I linked to in my original post:
At this point, I believe what you are seeing is strictly due to nvidia-settings reporting deficiencies, rather than the driver not taking proper advantage of the GPU clocks.
I think what is getting reported in nvidia-settings (both in the PowerMizer page and GPU3DClockFreqs) is the minimum value of each range.
You can be confident that the clock is certainly not running below what is reported in nvidia-settings, and it is likely running above that.
It doesn't inspire confidence; it just implies some shady goings-on.
On mine I believe it's slower, but I have the additional laptop complication where nvidia-settings' PowerMizer is convinced it's on battery when it is not.
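One way to cross-check that (a sketch, assuming a standard Linux sysfs layout) is to ask the kernel what it thinks the power source is, via `/sys/class/power_supply`, and compare that against what PowerMizer claims. AC adapters expose `type` = "Mains" and an `online` flag there:

```python
from pathlib import Path

def kernel_on_battery(power_supply_root: str = "/sys/class/power_supply") -> bool:
    """Return True if no mains (AC) supply reports itself online.

    Reads the standard Linux power_supply sysfs class: each supply
    directory has a 'type' file ("Mains", "Battery", ...) and mains
    supplies have an 'online' file containing 0 or 1.
    """
    for supply in Path(power_supply_root).iterdir():
        type_file = supply / "type"
        online_file = supply / "online"
        if not (type_file.exists() and online_file.exists()):
            continue
        if type_file.read_text().strip() == "Mains" and \
           online_file.read_text().strip() == "1":
            return False  # at least one AC adapter is plugged in
    return True
```

If this returns False (on AC) while PowerMizer still shows battery as the source, that points at the driver misreading the power state rather than the machine actually being on battery.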
Are you able to bench with Heaven 4 on Windows and Linux on the same hardware to show an FPS difference? Mine was nearly the same FPS, running OpenGL with tessellation off in both OSes.
FYI: I see the same issue on my 660 Ti. I ran the Unigine Valley benchmark on Linux and WinXP:
Unigine Valley Benchmark 1.0
CPU model: Intel(R) Core™2 Duo CPU E6850 @ 3.00GHz (2999MHz) x2
GPU model: GeForce GTX 660 Ti PCI Express 313.18 (2048MB) x1

Run 1: Linux 2.6.37.6-24-desktop i686, OpenGL, 1920x1200 4xAA fullscreen, Custom preset, High quality
FPS: 35.0  Score: 1464  Min FPS: 16.9  Max FPS: 66.0

Run 2: Windows XP (build 2600, Service Pack 3) 32bit, OpenGL, 1920x1200 4xAA fullscreen, Custom preset, High quality
FPS: 33.1  Score: 1387  Min FPS: 15.6  Max FPS: 54.7

Run 3: Windows XP (build 2600, Service Pack 3) 32bit, Direct3D9, 1920x1200 4xAA fullscreen, Custom preset, High quality
FPS: 34.8  Score: 1455  Min FPS: 14.1  Max FPS: 73.1
So it seems OpenGL performance is equal... but whether the same holds for CUDA (as Roaster mentioned above), I can't check.