GTX 680s on Linux running slower than they should?

Ten months ago it was reported that GTX 680s were either reporting their clock frequencies incorrectly or perhaps actually running slower than they should - [url]http://www.phoronix.com/scan.php?page=news_item&px=MTA4ODc[/url]

As of today:
nvidia-settings - (link broken)
Heaven 4.0 - (link broken)

Any news on this? Is it a reporting error? Are the cards running slower than they should be?
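
In the meantime, the number everyone is arguing about can be read straight from a terminal. A minimal sketch in Python (it just shells out to the nvidia-settings CLI, which needs a running X session; the "core,memory" terse output format is an assumption based on how packed attributes are usually printed):

[code]
# Read the clocks nvidia-settings reports for GPU 0 (one-shot query).
# Assumes the proprietary driver's nvidia-settings CLI is on PATH and
# an X session is running.
import subprocess

out = subprocess.check_output(
    ["nvidia-settings", "-t", "-q", "[gpu:0]/GPUCurrentClockFreqs"]
).decode().strip()
core, mem = out.split(",")  # assumed terse format: "core,memory" in MHz
print("reported core: %s MHz, memory: %s MHz" % (core, mem))
[/code]

Of course this only shows what the driver reports, which is exactly the value in dispute.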

See my post on the 680M. I believe that is the case, as Heaven benchmarks report the same clock speed throughout, and both FPS and GPU temperature are lower than in Windows. I am going to rerun a single-monitor test on Heaven 4 today, but I think I also have 'on battery power' issues (nvidia-settings shows battery as the power source). I know my GPU fan never seems to hit its top speed.

Mine shows 324 MHz as the core speed all the time, with memory going up to 1800 MHz.
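
For a reading stuck at 324, one way to see whether it ever moves is to poll it while a benchmark runs. A rough sketch along the same lines as above (start Heaven in another terminal and watch the output):

[code]
# Poll the reported clocks once a second; run Heaven/Valley in parallel
# and watch whether the reading ever ramps above the 324 MHz idle value.
import subprocess, time

def reported_clocks(gpu=0):
    out = subprocess.check_output(
        ["nvidia-settings", "-t", "-q", "[gpu:%d]/GPUCurrentClockFreqs" % gpu]
    )
    return out.decode().strip()  # assumed format: "core,memory" in MHz

while True:
    print("%s  %s" % (time.strftime("%H:%M:%S"), reported_clocks()))
    time.sleep(1)
[/code]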

Well, if it is indeed just a reporting error, I'd like to know how NVIDIA verified that, what process/tool is used to measure the actual clock speed, and why said tool cannot be implemented in the drivers and/or released to put our minds at rest.
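
The closest thing to an independent check I know of is nvidia-smi, which ships with the same driver but reads clocks through NVML rather than the X driver; be warned that on GeForce cards of this generation many of its fields may simply come back as N/A:

[code]
# Cross-check with nvidia-smi, which queries clocks via NVML instead of
# the X driver. On GeForce cards many fields may be reported as "N/A".
import subprocess

print(subprocess.check_output(["nvidia-smi", "-q", "-d", "CLOCK"]).decode())
[/code]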

As it stands now, we only have their word, and those words are very carefully phrased. From the Phoronix article I linked to in my original post:

At this point, I believe what you are seeing is strictly due to nvidia-settings reporting deficiencies, rather than the driver not taking proper advantage of the GPU clocks.

I think what is getting reported in nvidia-settings (both in the PowerMizer page and GPU3DClockFreqs) is the minimum value of each range.

You can be confident that the clock is certainly not running below what is reported in nvidia-settings, and it is likely running above that.

It doesn't inspire confidence; it just implies some shady goings-on.
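
One way to put the "minimum value of each range" explanation to the test is to dump the driver's own performance-level table next to the level it claims to be in. A sketch, assuming GPU 0 (GPUPerfModes and GPUCurrentPerfLevel are standard read-only nvidia-settings attributes as far as I know, but verify the names against your driver version):

[code]
# Dump the performance-level table and the driver's current level, plus
# GPU3DClockFreqs, which NVIDIA's statement mentions.
import subprocess

def q(attr, gpu=0):
    out = subprocess.check_output(
        ["nvidia-settings", "-t", "-q", "[gpu:%d]/%s" % (gpu, attr)]
    )
    return out.decode().strip()

print("perf level table:", q("GPUPerfModes"))  # e.g. "perf=0, nvclock=324, ... ; perf=1, ..."
print("current level:   ", q("GPUCurrentPerfLevel"))
print("3D clocks:       ", q("GPU3DClockFreqs"))
print("current clocks:  ", q("GPUCurrentClockFreqs"))
[/code]

If the current clocks match the bottom of the range for the reported level, that would at least be consistent with NVIDIA's explanation.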

On mine I believe it's slower, but I have the additional laptop complication where nvidia-settings' PowerMizer is convinced it's on battery when it is not.

Are you able to bench with Heaven 4 on Windows and Linux on the same hardware to show an FPS difference? Mine showed a noticeable FPS difference running OpenGL with tessellation off in both OSes.
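
As for the battery misdetection: the driver's belief about the power source can be queried directly too. A small sketch (the attribute name is GPUPowerSource; the 0 = AC / 1 = battery mapping is my assumption, so verify it on your machine):

[code]
# Ask the driver what power source it currently believes the machine is on.
import subprocess

out = subprocess.check_output(
    ["nvidia-settings", "-t", "-q", "[gpu:0]/GPUPowerSource"]
).decode().strip()
# Assumed mapping: 0 = AC, 1 = battery.
print("driver thinks power source is:", {"0": "AC", "1": "battery"}.get(out, out))
[/code]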

As written in my other thread, I am pretty sure that, despite official statements by NVIDIA, the Linux drivers make the card run slower.

GTX 660, Linux: 3:11
GTX 660, Windows: 0:44

FYI - I see the same issue with my 660 Ti. I ran the Unigine Valley benchmark on Linux and WinXP:

Unigine Valley Benchmark 1.0
FPS: 35.0 Score: 1464 Min FPS: 16.9 Max FPS: 66.0
System Platform: Linux 2.6.37.6-24-desktop i686
CPU model: Intel(R) Core™2 Duo CPU E6850 @ 3.00GHz (2999MHz) x2
GPU model: GeForce GTX 660 Ti PCI Express 313.18 (2048MB) x1
Settings Render: OpenGL Mode: 1920x1200 4xAA fullscreen Preset Custom Quality High

FPS: 33.1 Score: 1387 Min FPS: 15.6 Max FPS: 54.7
System Platform: Windows XP (build 2600, Service Pack 3) 32bit
Settings Render: OpenGL Mode: 1920x1200 4xAA fullscreen Preset Custom Quality High

FPS: 34.8 Score: 1455 Min FPS: 14.1 Max FPS: 73.1
System Platform: Windows XP (build 2600, Service Pack 3) 32bit
Settings Render: Direct3D9 Mode: 1920x1200 4xAA fullscreen Preset Custom Quality High

So it seems OpenGL is roughly equal… but I can't check whether the same is true for CUDA (the times Roaster posted above).
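
One way to check the CUDA side without Blender is to time an identical raw workload on both OSes. A minimal sketch assuming PyCUDA is installed on both machines (the workload, size, and iteration count are arbitrary choices of mine, not anything from this thread):

[code]
# Time a simple SAXPY-style CUDA workload; run the same script on Linux
# and Windows on the same card and compare elapsed times. Assumes PyCUDA.
import numpy as np
import pycuda.autoinit            # creates a context on the default GPU
import pycuda.driver as drv
import pycuda.gpuarray as gpuarray

n = 10 * 1000 * 1000
x = gpuarray.to_gpu(np.random.rand(n).astype(np.float32))
y = gpuarray.to_gpu(np.random.rand(n).astype(np.float32))

start, end = drv.Event(), drv.Event()
start.record()
for _ in range(200):
    z = 2.0 * x + y               # arbitrary arithmetic workload
end.record()
end.synchronize()
print("elapsed: %.1f ms" % start.time_till(end))
[/code]

If the Linux and Windows times for this come out close, the Cycles gap is more likely a Blender build difference than a driver clock problem.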

Is there any official statement on the issue with GTX 660 and GTX 680 clock speeds?

I found another user complaining about the performance using Linux: [url]http://blenderartists.org/forum/showthread.php?239480-2-61-Cycles-render-benchmark&p=2311942&viewfull=1#post2311942[/url]