Performance Issue With 337.19


I’m running Fedora 20 x86_64. Recently Fedora upgraded their kernel to 3.14. I was using the drivers supplied by RPMFusion. The current version that they supply is 331.67.

After the upgrade, I noticed a significant performance hit in a game I play that does not occur with the 3.13.x kernels. So I decided to download and install the latest stable driver from NVIDIA, 334.21, but it would not compile properly. I then tried the latest beta, which installed correctly and is running on my system. However, I am still seeing the performance hit on the 3.14.x kernel; under the 3.13.x kernel things seem fine.

I am willing to help resolve the issue, as best as my skills will allow.

nvidia-bug-report.log.gz (197 KB)

I have also opened up a bug report at:

I forgot to mention… Currently I am running 337.19 Beta

I’ve been beating my head against this for the last week, and am now convinced that this is not an NVIDIA driver issue per se, but a problem with the CPU frequency scaling governor as it relates to kernel 3.14.

I am continuing to report on this issue in case someone else runs across a similar problem.

The game with the performance issue that I mentioned is Tera. I am playing it under Wine.

The developers of the game decided to have the CPU do much of the rendering, so I decided to investigate that angle.

My preliminary conclusion is based on the results of the following actions:

  1. Adding:
    Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerDefaultAC=0x1"
    to my xorg.conf

  2. Setting my BIOS to run the CPU at full frequency.

  3. Enabling and starting the cpupower service

  4. Installing, configuring, enabling, and starting tuned.service with:
    a) active_profile set to latency-performance
    b) dynamic_tuning disabled
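For anyone wanting to reproduce steps 3 and 4, here is a rough sketch of the commands involved (Fedora with systemd; the state-changing commands are shown as comments since they need root, and the tuned file paths are standard tuned locations, not something specific to my setup):

```shell
# Steps 3-4 above, as root:
#   systemctl enable --now cpupower.service
#   yum install -y tuned
#   systemctl enable --now tuned.service
#   tuned-adm profile latency-performance
#   # dynamic_tuning is toggled in /etc/tuned/tuned-main.conf
#
# tuned records the active profile here once it has been configured:
profile_file=/etc/tuned/active_profile
if [ -r "$profile_file" ]; then
    cat "$profile_file"
else
    echo "tuned not configured"
fi
```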

Performance with the 3.14 kernel series seems much smoother and on par with the 3.13 series. I will probably experiment with tuned a bit more, but I still have some “testing” to do with the current tweaks.

I will report anything significant that I might run across.

Oh… I forgot to mention: I have reinstalled the current stable drivers as supplied by RPMFusion.

There are a lot of known performance issues for games due to Intel power governors, with both the old style cpufreq driver and the newer pstate stuff for Ivy Bridge, etc. Forcing highest frequency is trivial for cpufreq but I’m not familiar with the method for pstates.

I think Ubuntu disables pstates by default due to the issues.

It’s not clear that Intel really knows what they’re doing here.

cpufreq-set -g performance

works even with pstates
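The same thing can be done directly through sysfs, without the cpufrequtils package, and it works for both drivers (a sketch; requires root, and the glob pattern assumes the usual per-CPU cpufreq layout):

```shell
# Force the "performance" governor on every CPU core (run as root).
# Under intel_pstate only "performance" and "powersave" are offered;
# under acpi-cpufreq you also get "ondemand", "conservative", etc.
for gov in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    [ -w "$gov" ] && echo performance > "$gov"
done
```

Note this is not persistent across reboots; the cpupower service mentioned above is one way to reapply it at boot.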

I’m seeing the same performance regression on a number of Xeon workstations I have @home and @work.
Seems that in 3.14 Fedora switched Sandy Bridge and Ivy Bridge Xeons from ACPI/ondemand to p-state/performance.
Before I revert world + dog back to ACPI/ondemand, is anyone seeing this performance regression with ACPI/ondemand rather than p-state/performance?
(E.g. cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver)
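A quick way to check which driver and governor a machine actually ended up with (a sketch; the sysfs paths only exist on cpufreq-capable kernels):

```shell
# Report the active frequency-scaling driver and governor for cpu0.
cpu0=/sys/devices/system/cpu/cpu0/cpufreq
if [ -r "$cpu0/scaling_driver" ]; then
    echo "driver:   $(cat "$cpu0/scaling_driver")"   # e.g. intel_pstate or acpi-cpufreq
    echo "governor: $(cat "$cpu0/scaling_governor")" # e.g. performance or ondemand
else
    echo "no cpufreq interface on this kernel"
fi
```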


Same issue here: Ubuntu 14.04, Intel i7-3630QM, NVIDIA GT 740M. I’m playing Dota 2 with Bumblebee.

-stock install : linux 3.13, nvidia 331.xx
Dota 2 gives me ~60fps; nvidia-settings shows 85°C

-xorg-edgers upgrades : linux 3.15, nvidia 337.xx
Dota 2 gives me ~30fps with high variability, 95°C

I haven’t identified the cause of the issue, but there is definitely something going on here.