Driver-level antialiasing not really usable for composited Linux desktops?

Hi folks,

I’m relatively new to NVIDIA on Linux (I’d used NVIDIA cards under Windows in the past, but my Linux experience for the past several years has all been on Intel Mesa) and have been getting used to the various tweaks exposed through nvidia-settings (Coolbits, etc.).

One thing I’ve noticed is that forcing system-wide MSAA or FXAA for OpenGL applications doesn’t really work in practice. Because most modern Linux desktops are themselves OpenGL applications, enabling MSAA or FXAA results in heavily antialiased text (as though desktop applications render to textures, which I believe is actually the default behaviour in Compiz), which is often illegible and in any case completely unnecessary.

I wouldn’t bother with the system-wide settings at all if not for the fact that many modern Linux titles don’t seem to expose their own MSAA or FXAA options for some reason, as though they expect you to use the system-wide settings instead.

Am I missing something? Thanks!

Can you set an application profile to disable MSAA / FXAA for compiz?

Or the other way around: leave the global settings at their defaults and create application profiles for the games that lack those options.
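For reference, a per-game profile of that sort would look something like the following in ~/.nv/nvidia-application-profiles-rc. This is only a sketch based on the application-profiles appendix of the driver README – the process name ("mygame") is a placeholder, and the exact setting keys (e.g. GLFSAAMode) should be checked against the README for your driver version:

```json
{
    "rules": [
        {
            "pattern": {
                "feature": "procname",
                "matches": "mygame"
            },
            "profile": "force-aa-for-game"
        }
    ],
    "profiles": [
        {
            "name": "force-aa-for-game",
            "settings": [
                { "key": "GLFSAAMode", "value": 10 }
            ]
        }
    ]
}
```

The same mechanism works in reverse: a rule matching the compositor's process name with an antialiasing value of 0 would exempt it from a globally forced setting.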

You can read about application profiles here

There are already application profile rules in recent drivers to disable G-SYNC for most compositors, so it might make sense to disable antialiasing there too. I’ll run the idea by my coworkers.

[Edit: this is tracked in bug 1659555]

Thanks, Aaron!

I’d love it if you could also look into the issue of PowerMizer being locked to the highest performance state whenever three or more displays are connected – I tried reporting the bug but didn’t get much of a response (reference number 150601-000644).

Hi axfelix,

The PowerMizer thing isn’t actually a bug: it needs to lock the GPU to a high enough performance state so that there is enough memory bandwidth available to drive all of the displays.

Hey Aaron,

Sorry to derail this thread, but I still think the implementation could be more efficient. In my case I’m running a Titan X, and I’d be very surprised if going from two displays to three really necessitated jumping from PowerMizer state 0 all the way to state 3, as is the current behavior. For very powerful GPUs, surely it would be enough to block the lowest state rather than lock the highest one – so that on the Titan X, for example, the card is only allowed to drop to state 1 rather than 0 when three displays are connected. That would still be an enormous improvement in power consumption over forcing state 3, while offering significantly more memory bandwidth than state 0. The PowerMizer code generally seems pretty all-or-nothing at the moment: when it drops down from state 3 after exiting a game, it cycles through states 2 and 1 in a few seconds on the way back to 0, but I’ve never observed it actually holding state 1 or 2 for an extended period.
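For anyone who wants to observe this behaviour themselves, the PowerMizer state can be watched from the command line. This is just a sketch – the attribute names come from nvidia-settings, and the output (number of levels, clock ranges) varies by GPU and driver version:

```shell
# List the performance levels (and their clock ranges) the GPU exposes
nvidia-settings -q [gpu:0]/GPUPerfModes -t

# Poll the current PowerMizer level once a second while
# connecting or disconnecting displays
watch -n 1 'nvidia-settings -q [gpu:0]/GPUCurrentPerfLevel -t'
```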

It’s a little more complicated than that. In order to change the memory clocks, the driver has to pause the memory interface, re-train the links, and then turn it on again. Depending on the configuration, it might not have enough time to do that before the display underflows. So if that situation arises, it just locks it to the highest speed to avoid glitching the displays.

Ah, well – thanks for the detailed explanation, anyhow! Really appreciate you being so accessible here; I’ll give up pushing the issue :)

Hi axfelix,

I’m a member of the Linux driver team that has been assigned to this bug.

Could you provide me with an nvidia-bug-report.log with and without the antialiasing overrides that produced the problem? Instructions on doing so can be found here.
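For anyone else following along: generating the log is just a matter of running the script bundled with the driver as root (it needs root to gather kernel and X server state):

```shell
# Collects driver, kernel, and X logs into nvidia-bug-report.log.gz
# in the current directory
sudo nvidia-bug-report.sh
```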

Additionally, please provide me with screenshots with and without the overrides. If the issue appears on the screen but not in the screenshots, please also let me know.

Offline mail reply from axfelix when asked for repro steps:

Will do. I can’t restart the X server on the machine that I can replicate the issue on right now, but I’ll get to this when I can.