I’m running Fedora 20 x86_64. Recently Fedora upgraded their kernel to 3.14. I was using the drivers supplied by RPMFusion. The current version that they supply is 331.67.
After the upgrade, I noticed a significant performance hit in a game I was playing that does not exist with kernel 3.13.x. So… I decided to download and install the latest stable version from Nvidia, version 334.21, which would not compile properly. So… I tried the latest beta version instead, and it did install correctly and is running on my system. However, I am still seeing the performance hit on the 3.14.x kernel. When running under the 3.13.x kernel, things seem fine.
I am willing to help resolve the issue, as best as my skills will allow.
I’ve been beating my head against this for the last week, and am now convinced that this is not an NVidia driver issue per se but a problem with the CPU frequency-scaling governor as it relates to kernel 3.14.
I am continuing to report on this issue in case someone else runs across a similar one.
The game I’m playing with the performance issue that I mentioned is Tera. I am playing it under wine.
The developers of the game decided to get the CPU to do much of the rendering. So I decided to investigate that angle.
My preliminary conclusion is based on the results of the following actions:
1. Adding the following to my xorg.conf (note: straight quotes, not curly ones, or the server will reject the option):
   Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerDefaultAC=0x1"
2. Setting my BIOS to run the CPU at full frequency.
3. Enabling and starting the cpupower service.
4. Installing, configuring, enabling, and starting tuned.service with:
   a) active_profile set to latency-performance
   b) dynamic_tuning disabled
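For anyone following along, steps 3 and 4 above (everything except the BIOS change and the xorg.conf edit) can be sketched as shell commands. This is a sketch, not a verified recipe: the package and service names are the Fedora 20 ones (kernel-tools, tuned), everything must run as root, and the sed edit assumes the stock /etc/tuned/tuned-main.conf layout.

```shell
# Step 3: enable and start the cpupower service (kernel-tools package)
systemctl enable cpupower.service
systemctl start cpupower.service

# Step 4: install tuned, disable dynamic tuning, start the service,
# and select the latency-performance profile
yum install -y tuned
sed -i 's/^dynamic_tuning.*/dynamic_tuning = 0/' /etc/tuned/tuned-main.conf
systemctl enable tuned.service
systemctl start tuned.service
tuned-adm profile latency-performance   # sets active_profile
```

You can confirm the profile took effect afterwards with `tuned-adm active`.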
Performance with the 3.14 kernel series now seems much smoother and on par with the 3.13 series. I will probably try playing with tuned a bit more, but I still have some testing to do with the current tweaks.
I will report anything significant that I might run across.
There are a lot of known performance issues for games due to Intel power governors, with both the old-style cpufreq drivers and the newer intel_pstate driver used on Sandy Bridge, Ivy Bridge, etc. Forcing the highest frequency is trivial with cpufreq, but I’m not familiar with the method for pstates.
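For reference, a sketch of both approaches (as root; the sysfs paths are the standard cpufreq/intel_pstate ones, but note that intel_pstate exposes no userspace or ondemand governor, so the closest equivalent to "force max" is raising its minimum performance floor):

```shell
# acpi-cpufreq: force the performance governor on every CPU
for g in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    echo performance > "$g"
done

# intel_pstate: pin the hardware's minimum performance at 100%
# and make sure turbo is not disabled
echo 100 > /sys/devices/system/cpu/intel_pstate/min_perf_pct
echo 0   > /sys/devices/system/cpu/intel_pstate/no_turbo
```

The intel_pstate settings do not survive a reboot; a tuned profile or an rc-local snippet is needed to make them stick.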
I think Ubuntu disables pstates by default due to the issues.
It’s not clear that Intel really knows what they’re doing here.
I’m seeing the same performance regression on a number of Xeon workstations I have @home and @work.
It seems that in 3.14 Fedora switched Sandy Bridge and Ivy Bridge Xeons from ACPI/ondemand to p-state/performance.
Before I revert world + dog back to ACPI ondemand: is anyone seeing this performance regression with ACPI/ondemand rather than p-state/performance?
(E.g. cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver)
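To save anyone some typing, here is a quick loop that reports the driver and governor for every CPU, plus the boot-time revert. The check itself is harmless to run anywhere; the revert uses the `intel_pstate=disable` kernel parameter to fall back to acpi-cpufreq, and the grub paths assume GRUB 2 as shipped with Fedora.

```shell
# Which scaling driver and governor is each CPU actually using?
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    [ -r "$cpu/cpufreq/scaling_driver" ] || continue
    printf '%s: %s / %s\n' "${cpu##*/}" \
        "$(cat "$cpu/cpufreq/scaling_driver")" \
        "$(cat "$cpu/cpufreq/scaling_governor")"
done

# To revert to acpi-cpufreq/ondemand, boot with intel_pstate=disable:
# add it to GRUB_CMDLINE_LINUX in /etc/default/grub, then (as root)
#   grub2-mkconfig -o /boot/grub2/grub.cfg
# and reboot.
```

On a box still using the old driver the loop prints something like `cpu0: acpi-cpufreq / ondemand`; after the 3.14 switch you would see `intel_pstate / performance` instead.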