310 performance boost?

Hi,
The advertised performance boost (doubling or so) is not visible to me, at least on my GT 610.
There is some speedup like

Unigine:
Sanctuary: 38 → 42 fps
Tropics: 13 → 15 fps

…from 304.60 to 310.14 but I wouldn’t call this dramatic.
Moreover, I see some definite slowdown with some Cg demo applications (cgfx_bumpdemo2 and cgfx_interfaces)

Doesn’t the speedup apply to my card’s architecture?

Read the PR bullshizzle again! It’s a 2x speedup in Left 4 Dead 2 only (and maybe in the other Source-based engines too).

Could easily happen.
On [url]http://www.phoronix.com/scan.php?page=news_item&px=MTE1Njc[/url] they already wrote in August that L4D2 is faster on Linux. So it may be because of the Source engine differences.

Does anyone from nVidia dare to comment?

Apparently there are some improvements: [url]http://www.phoronix.com/scan.php?page=article&item=nvidia_r310_linux&num=1[/url], but I have yet to test them myself. Maybe today.

Yeah, the “double performance” claim is a bit optimistic. You should test with the threaded optimisations option, though, you’ll get a pretty substantial performance gain in some games. But be aware that other games will actually result in a performance loss. It depends on the game, and likely your hardware as well.

I agree, I will retest as soon as I can. This seems to be the way, according to the docs:

[i]Threaded Optimizations

The NVIDIA OpenGL driver supports offloading its CPU computation to a worker thread. These optimizations typically benefit CPU-intensive applications, but might cause a decrease of performance in applications that heavily rely on synchronous OpenGL calls such as glGet*. Because of this, they are currently disabled by default.

Setting the __GL_THREADED_OPTIMIZATIONS environment variable to “1” before loading the NVIDIA OpenGL driver library will enable these optimizations for the lifetime of the application.[/i]
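
In practice this means setting the variable just for the process you want to measure, for example (a sketch assuming a POSIX shell; the echo here only stands in for the actual benchmark binary):

```shell
# Set the variable for this one command only; it has to be in the
# environment before the process loads libGL.so.1.
# Replace the sh -c '…echo…' stand-in with your real benchmark binary.
__GL_THREADED_OPTIMIZATIONS=1 \
    sh -c 'echo "threaded opts: ${__GL_THREADED_OPTIMIZATIONS}"'
# prints: threaded opts: 1
```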

(The Phoronix tests were run without this, though, and they are still far from a 2x increase in most cases. And still no one has commented on the Cg demo framerate decrease.)

Hi! Since gaming under Linux will become more and more interesting (thinking of the Steam port), I thought I would take a closer look at the performance difference between Windows 7 and Linux.
My system is an Intel dual-core @ 3 GHz with 8 GB RAM and a GeForce GTX 460, running Windows 7 64-bit (all updates installed) with the 310.33 driver. On the same machine I have openSUSE 12.2 64-bit with the 310.14 driver modules.

I tested gzdoom (an OpenGL port of the Doom sources) with exactly the same settings. I compiled the latest gzdoom sources (svn 1466) myself under Windows and Linux to be sure of getting exactly the same test basis.

The result is a big difference in OpenGL performance. On Windows I get about 380 fps @ 1920x1080, while on Linux it is about 214 fps. I uploaded two screenshots as a comparison between the systems (the fps output is visible in the upper right corner). As you can see from the shots, the video settings are the same and both systems deliver at least the same picture quality.

[url]http://screenshotcomparison.com/comparison/157098[/url]

For me there is still a big difference in 3D performance between Windows and Linux. Hopefully you can do something about it in the future, so real hardcore gamers become more interested in running games on Linux than on Windows. Having been a big Linux fan for about 20 years now, I would really welcome it.

cu
Gargi

Gargi, thanks for the good description.
Have you tried the multithreaded OpenGL setting on Linux?

Well I did. (AthlonX2 3600, 32 bit 3.6.2 kernel, GT 610 card)
See the CPU load screenshots:
I ran the Unigine Sanctuary (http://screenshotcomparison.com/comparison/157209)
and Tropics (http://screenshotcomparison.com/comparison/157212) benchmarks.

When I enabled the OpenGL multithreading, I always got

  • higher CPU usage, particularly kernel load
  • a much lower framerate (15 vs. 9 and 42 vs. 27 frame/s)

As the description in the docs says, some applications may benefit from MT and some may not. Unigine is positioned as a professional engine, and I think it is, yet it runs much slower with the experimental MT. For me this means it’s not trivial to benefit from this new feature. It’s a good idea to have it disabled by default.

I just tried it, but I’m not sure if I did it correctly, since there was absolutely no difference. I tried it with gzdoom (the Doom OpenGL port), ioquake3 (a fork of Quake 3 Arena, OpenGL) and DarkPlaces Quake (a fork of the Quake sources).
I just started the games for fps measuring, e.g.

nvidia-settings --assign="SyncToVBlank=0"
LD_PRELOAD="libpthread.so.0 libGL.so.1" __GL_THREADED_OPTIMIZATIONS=1 darkplaces-glx

What is the correct way to enable the threaded optimizations?

I think these rather old games don’t utilize it.

Edit: Here is the comparison shot of DarkPlaces Quake in HD with hi-res textures. As you can see, Linux is also slower than Windows (same system, 310.x drivers). The fps is shown in the lower right corner.

[url]http://screenshotcomparison.com/comparison/157227[/url]

But it looks as good on Linux as it does under Windows ;)

Edit 2: To complete this, here is a comparison using the 64-bit version of ioquake3 (compiled from the svn 2350 sources).
[url]http://screenshotcomparison.com/comparison/157229[/url]
There is a difference in picture quality (the Linux version looks more crisp, although both use the same anisotropic filtering …), but the Windows version is almost twice as fast as the Linux one …

cu
Gargi

Unigine makes heavy use of synchronous GL operations that don’t interact well with the threaded optimizations. In my testing, the applications that benefited most from it were Source engine games, and id Tech 4 games to a lesser extent (Doom 3, Quake 4).

Gargi, what desktop environment were you using for your testing on Linux?

Hi!
I’m using KDE 4.8.5 “release 2”. Desktop effects are disabled while fullscreen apps are running.

Edit: I installed Gnome too and ran the same tests. Performance is the same as under KDE, so I guess the desktop environment doesn’t make a big difference (on openSUSE; I don’t know if it is the same on Ubuntu).

I also checked the id Tech 4 engine (Doom 3). Performance seems to be closer to the Windows version, but there I noticed another strange behavior in the framerate: it oscillates rapidly between 40 and 60 fps even if you stand absolutely still, which results in microstuttering while walking around. Linux does better than Windows with the hi-res texture mod (sikkmod + Wulfen texture pack): under Windows I get brief hangs (about a second) when opening doors, I guess because new textures are being loaded and the hi-res ones are very big. Under Linux I don’t get these short hangs. Do you know why this happens under Windows and not under Linux?

cu
Gargi

Well, the 310.19 is interesting: (32 bit, 3.6.2 kernel)
In the Unigine Sanctuary I got substantial speedup: from 41.9 to 70.7 FPS.

I don’t get similar speedup though in the other Unigine benchmarks (Tropics and Heaven).

Also, the Cg demo performance is back to the 304 driver level (i.e. improved compared to 310.14).

Gargi,

About your ioquake3 comparison: it looks like the Linux version of the engine runs with anisotropic filtering (and possibly something else) enabled, while the Windows version has these extras disabled.

I don’t know about KDE, but at least modern Gnome3 seems to render every window to the display buffer, even if it’s not visible. That means that every background application is eating GFX resources. I suggest running your games either in gnome3 compatibility/fallback mode or XFCE to see if there is a huge difference in frame rate for your games. These modern compositing window managers seem to be rather buggy and resource intensive, so it will be hard to get a fair comparison.