Optimus on Ubuntu 18.04 is a step backwards ... but I found the first good solution

You can’t blame the NVIDIA driver if the card isn’t powering down, because prime-select intel now removes the NVIDIA driver and rebuilds the initramfs. This is supposed to let nouveau allow the kernel to power off the card (the devs have abandoned the old way; I don’t know why).
It’s not working very well, which is a shame because it comes at an enormous price. Firstly, switching is now very slow. Secondly, this method mandates a reboot. The old way only needed a reboot because of a bug in logind; now, even once that bug is fixed, prime-select will still require one.
To my amazement, Mint switches without a reboot.

Okay, I managed to get power usage on my laptop back down to 12W in iGPU mode by removing the “nouveau.runpm=0” kernel parameter.
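For anyone wanting to do the same: assuming the parameter was added via GRUB, you’d strip it from /etc/default/grub and then run update-grub. A minimal sketch of the edit, run here on a synthetic line rather than the real file:

```shell
# Sketch only: strips "nouveau.runpm=0" from a GRUB_CMDLINE_LINUX_DEFAULT
# line. On a real system, apply the same sed to /etc/default/grub (back it
# up first) and then run: sudo update-grub
line='GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nouveau.runpm=0"'
cleaned=$(printf '%s\n' "$line" | sed 's/ *nouveau\.runpm=0//')
echo "$cleaned"
```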

Remaining problems:

  • Suspend doesn't work.
  • Tearing on the second monitor (I am using “options nvidia-drm modeset=1”).

Modeset=1 enables prime sync, which only helps with tearing on displays connected to the Intel card. Tearing on any other screen is an NVIDIA-side problem. Try enabling pipeline composition (via the nvidia-settings GUI, for instance).

Any idea how to enable “pipeline composition”? It’s available on my desktop, but not on my laptop. On my laptop, the screens (in the “X Server Display Configuration” section of the Nvidia Settings app) are kind of grayed out and have “PRIME Display” written on them. When I click the “Advanced” button, no options are displayed for “pipeline composition”.

There’s also no /etc/X11/xorg.conf file, so it’s not possible to add this to the “Screen” section:

Option "metamodes" "nvidia-auto-select +0+0 {ForceCompositionPipeline=On}"

Can I set this “ForceCompositionPipeline=On” using a command in terminal?

If you have more questions on this topic, you should take them to the prime sync thread. This thread is supposed to be about the changes Ubuntu 18.04 made to the prime-select scripts.

However, here is an answer.

If you are using a recent version of nvidia-settings, the GUI lets you set pipeline composition: go to X Server Display Configuration, click the nvidia-managed screen(s) showing tearing, and click Advanced. Older versions of nvidia-settings don’t expose this option even when the driver supports it; I don’t know when it appeared.

You can’t enable pipeline composition on the laptop screen though, only on displays connected to the nvidia card. The non-nvidia screens show “PRIME”. I assume you have at least one screen which is not PRIME.
Only fairly recent versions of the settings GUI show PRIME screens at all (previously they simply weren’t listed), so it seems you have a modern version, and I think you should find pipeline composition on your Advanced settings page.

If you have tearing on those PRIME displays (though there is probably only one such display, the laptop’s panel), then prime sync is not working. It’s the job of prime sync to stop tearing on the PRIME output display(s). Pipeline composition and prime sync both solve tearing, but the screens they work on are mutually exclusive and they are quite different (prime sync is really just a type of vsync; pipeline composition is something else).

Here is a script:

#!/bin/bash
nvidia-settings --assign CurrentMetaMode="$(nvidia-settings -q CurrentMetaMode -t | tr '\n' ' ' | sed -e 's/.*:: \(.*\)/\1/' -e 's/}/, ForceCompositionPipeline = On}/g')" > /dev/null
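To see what the script’s pipeline produces, here it is applied to a sample of what nvidia-settings -q CurrentMetaMode -t typically prints (the attribute text below is a made-up example, not taken from a real machine):

```shell
# The query output has the form "Attribute ... :: <metamodes>"; the sed
# keeps only the part after "::" and appends ForceCompositionPipeline
# inside each {...} group.
sample="Attribute 'CurrentMetaMode' (host:0.0): id=50, switchable=yes, source=nv-control :: DPY-2: nvidia-auto-select @1920x1080 +0+0 {ViewPortIn=1920x1080, ViewPortOut=1920x1080+0+0}"
result=$(printf '%s\n' "$sample" | tr '\n' ' ' | sed -e 's/.*:: \(.*\)/\1/' -e 's/}/, ForceCompositionPipeline = On}/g')
echo "$result"
```

The transformed metamode string is then handed back to nvidia-settings --assign, which applies it live, so no xorg.conf is needed.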

FWIW I’ve found that every initial LTS version of Ubuntu and Ubuntu-based Linux Mint released over the past few years has had problems.

IMHO the first (or 2nd) point release is worth waiting for, and even then is only worth installing (on a spare HDD/SSD for testing) if it offers a reasonable prospect of resolving a lingering incompatibility that cannot be remedied by a kernel upgrade applied to a more mature and stable OS release.

As for Ubuntu 18.04 LTS in particular, it hung on reboot and did not recognize the Asus XG-C100C (Aquantia AQC107, atlantic driver) in my rig.

What’s more, IME the performance of the 4.15.0-xx kernel (used in Ubuntu 18.04) is noticeably inferior to that of mainline Linux 4.15.14 or 4.15.18 (the latter appearing to be the newest kernel that works with Linux Mint 18.3 MATE and thus, in all likelihood, Ubuntu 16.04.4).

EDIT: I was mistaken:

Apr 2, 2018
How to Install Linux Kernel 4.16 on Ubuntu 17.10 and Ubuntu 16.04 LTS
https://news.softpedia.com/news/how-to-install-linux-kernel-4-16-on-ubuntu-17-10-and-ubuntu-16-04-lts-520514.shtml

For my system (Gigabyte AERO 15x v8 with GTX 1070 MaxQ), the new prime-select method, which uses the nouveau module and vgaswitcheroo to power off the nvidia card in intel mode, doesn’t actually power off the card, and battery life tanks correspondingly. I had to revert to bbswitch and manually blacklist the nvidia and nouveau modules when I want to be in intel-only mode; bbswitch at least powers off the card properly.

The issue I have with bbswitch is that if I power off the card through bbswitch before starting X, the system just freezes during X startup. This seems to be an ACPI-related issue on this particular system. No combination of acpi_osi kernel parameters seems to work for me; perhaps a new BIOS release will fix it, but I’m not getting my hopes up. So for now, I have to boot with both cards powered on while having nvidia and nouveau blacklisted, and then power off the nvidia card via bbswitch after login.
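For reference, the post-login power-off goes through bbswitch’s /proc interface. A guarded sketch (it only acts if bbswitch is actually loaded, so it’s a no-op elsewhere):

```shell
#!/bin/sh
# bbswitch exposes /proc/acpi/bbswitch: writing OFF cuts power to the
# discrete card; reading it back shows the state.
if [ -w /proc/acpi/bbswitch ]; then
    echo OFF > /proc/acpi/bbswitch       # power off the nvidia card
    state=$(cat /proc/acpi/bbswitch)     # e.g. "0000:01:00.0 OFF"
else
    state="bbswitch interface not present; nothing to do"
fi
echo "$state"
```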

To get back to nvidia I need to remove the nvidia blacklist, run update-initramfs -u, and reboot. It’s a major PITA to do that every time; it takes forever.

Apparently the systemd/logind bug of keeping the drm handle open forever is fixed in 238, but 18.04 is currently on 237, so we are probably stuck with this initramfs-based method until they upgrade systemd, which might not actually happen in 18.04 at all.

It shouldn’t be necessary to un-blacklist the nvidia modules; blacklisting only keeps them from being autoloaded. They can still be loaded manually using modprobe, so creating a systemd service that runs before the display manager starts should be possible. That would make the initrd rebuild unnecessary.
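A sketch of what that could look like. The unit name and contents are my assumption, not anything shipped by the nvidia-prime package; it is written to /tmp here just to show the file, but on a real system it would go to /etc/systemd/system/ followed by systemctl enable:

```shell
# Hypothetical oneshot unit that modprobes the nvidia modules before the
# display manager starts, making the initramfs rebuild unnecessary.
cat > /tmp/nvidia-modprobe.service <<'EOF'
[Unit]
Description=Load nvidia modules before the display manager (sketch)
Before=display-manager.service

[Service]
Type=oneshot
ExecStart=/sbin/modprobe nvidia-drm

[Install]
WantedBy=graphical.target
EOF
cat /tmp/nvidia-modprobe.service
```

Since blacklisting only blocks autoloading, this unit is what would pull the module in explicitly when you want nvidia mode.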

Consider commenting on https://bugs.launchpad.net/ubuntu/+source/nvidia-prime/+bug/1765363

That’s true, so this way I could reduce the round trip to just enabling/disabling the “modprobe” service via systemctl plus a system reboot. And with systemd 238, which hopefully fixes the stuck card-handle issue, I will not even have to reboot. Sounds almost too good to be true :-)

Next step would be to get prime render offloading like in Windows, but now I’m dreaming…

Done.

I also have several issues, mostly “freeze on suspend” and “high power drain (15W) on Intel graphics”; GPU switching itself is not the problem. Using a new Dell XPS 15 with a 1050 and a fresh Ubuntu 18.04 install.

Is it safe to say that purchasing a laptop with an Nvidia GPU is not recommended when you need to run Ubuntu LTS?

The original title of this thread included the word “trainwreck”, but I thought that was a bit harsh. It is hard to imagine that such a radical redesign was planned for an LTS; it may reflect the large amount of behind-the-scenes magic needed to get Optimus working, which became unsustainable for reasons unknown. I have not found a working port of the Ubuntu approach in Arch or Fedora, although there are attempts, so it’s not easy.

There is surely some hope: Ubuntu’s support of Optimus has been better than anyone else’s because some people aimed to make it good, and those developers are still working on it.

Bumblebee has a lot of fans, but it is a solution that simply ignores multi-monitor users. Hopefully the Ubuntu/Debian devs can rework or reconsider the nvidia-prime module.
File bug reports and let Alberto know of the issues. Personally, I’ve had enough. Optimus used to be unavoidable with quad-core laptops; that’s no longer true, and as of Friday this becomes an academic issue for me. Until I made that decision, I went from 17.10 to Mint 18.3, which has great Optimus support based on Ubuntu’s but with no restart needed when switching. My idea was that this would buy enough time for Ubuntu to revert to bbswitch.

I follow your line of thought. But I am tempted to put the major blame on the Dell + Nvidia combo, as they clearly don’t test the XPS 15 against Linux. I am quite sure that with relatively minimal effort they could have provided standard methods for OSes other than Windows to run smoothly…

I’m hesitant to tell myself the truth: until Nvidia visibly steps out of the shadows and starts educating the hardware manufacturers, it seems best to avoid Nvidia entirely if you want to run Linux on laptops…

Here is a good solution, according to my testing:

https://github.com/matthieugras/Prime-Ubuntu-18.04
I’ve added two pull requests, including a one-line change to a file which I needed to make it work.
You’ll need Rust, which you can install from the 18.04 repositories or the recommended Rust way (both work).

Also, his script is hard-coded to assume you are using lightdm, which, given the poor history of gdm with nvidia modeset=1, probably makes sense.

It does what it says: you can change modes in seconds, with no reboot and no initramfs rebuild. On my Thinkpad P50 (Quadro M1000M, Ubuntu 18.04) it works fine; the nvidia card is really powered off.
It seems this works thanks to advancements in the Ubuntu nvidia package supporting multi-dispatch.
His code runs a little server in the background which kills the display manager, makes very minimal changes, and restarts lightdm.
The user-facing script is a modified version of prime-select, so you just do

sudo prime-select intel|nvidia

It kills the display manager instantly, not gracefully.

It uses bbswitch.

I might have not bought a pure-intel Thinkpad if I’d seen this earlier :)

I’m posting here since prime-select from the bbswitch-based package above (as linked and updated by TimRichardson) didn’t work on my Lenovo P50. Tim, thanks a lot for your work on this - I installed your fork from timrichardson/Prime-Ubuntu-18.04, merged your branches, and installed the result (without a problem), double-checked that my prime-select is now the one I just built, and tried sudo prime-select intel. After a few seconds it kicks me out into a TTY - I’m then able to do startx and make it work for a while. However, on the next reboot I’m stuck in a TTY again, and this time startx won’t work. I attached nvidia-bug-report.sh output in that condition. Also of interest is Xorg.0.log after the startx failure:

[   158.715] 
X.Org X Server 1.19.6
Release Date: 2017-12-20
[   158.715] X Protocol Version 11, Revision 0
[   158.715] Build Operating System: Linux 4.4.0-119-generic x86_64 Ubuntu
[   158.715] Current Operating System: Linux porty4 4.15.0-22-generic #24-Ubuntu SMP Wed May 16 12:15:17 UTC 2018 x86_64
[   158.715] Kernel command line: BOOT_IMAGE=/vmlinuz-4.15.0-22-generic root=/dev/mapper/ubuntu--vg-root ro acpi_backlight=vendor
[   158.715] Build Date: 13 April 2018  08:07:36PM
[   158.715] xorg-server 2:1.19.6-1ubuntu4 (For technical support please see http://www.ubuntu.com/support) 
[   158.715] Current version of pixman: 0.34.0
[   158.715] 	Before reporting problems, check http://wiki.x.org

	to make sure that you have the latest version.
[   158.715] Markers: (--) probed, (**) from config file, (==) default setting,
	(++) from command line, (!!) notice, (II) informational,
	(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
[   158.715] (==) Log file: "/home/arkamax/.local/share/xorg/Xorg.0.log", Time: Tue Jun  5 10:52:22 2018
[   158.715] (==) Using system config directory "/usr/share/X11/xorg.conf.d"
[   158.716] (==) No Layout section.  Using the first Screen section.
[   158.716] (==) No screen section available. Using defaults.
[   158.716] (**) |-->Screen "Default Screen Section" (0)
[   158.716] (**) |   |-->Monitor "<default monitor>"
[   158.716] (==) No device specified for screen "Default Screen Section".
	Using the first device section listed.
[   158.716] (**) |   |-->Device "Intel Graphics"
[   158.716] (==) No monitor specified for screen "Default Screen Section".
	Using a default monitor configuration.
[   158.716] (==) Automatically adding devices
[   158.716] (==) Automatically enabling devices
[   158.716] (==) Automatically adding GPU devices
[   158.716] (==) Automatically binding GPU devices
[   158.716] (==) Max clients allowed: 256, resource mask: 0x1fffff
[   158.716] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist.
[   158.716] 	Entry deleted from font path.
[   158.716] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist.
[   158.716] 	Entry deleted from font path.
[   158.716] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist.
[   158.716] 	Entry deleted from font path.
[   158.716] (WW) The directory "/usr/share/fonts/X11/100dpi" does not exist.
[   158.716] 	Entry deleted from font path.
[   158.716] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist.
[   158.716] 	Entry deleted from font path.
[   158.716] (==) FontPath set to:
	/usr/share/fonts/X11/misc,
	/usr/share/fonts/X11/Type1,
	built-ins
[   158.716] (==) ModulePath set to "/usr/lib/xorg/modules"
[   158.716] (II) The server relies on udev to provide the list of input devices.
	If no devices become available, reconfigure udev or disable AutoAddDevices.
[   158.716] (II) Loader magic: 0x55c6f2313020
[   158.716] (II) Module ABI versions:
[   158.716] 	X.Org ANSI C Emulation: 0.4
[   158.716] 	X.Org Video Driver: 23.0
[   158.716] 	X.Org XInput driver : 24.1
[   158.716] 	X.Org Server Extension : 10.0
[   158.717] (++) using VT number 2

[   158.718] (II) systemd-logind: took control of session /org/freedesktop/login1/session/_33
[   158.719] (II) xfree86: Adding drm device (/dev/dri/card0)
[   158.719] (II) systemd-logind: got fd for /dev/dri/card0 226:0 fd 11 paused 0
[   158.719] (II) xfree86: Adding drm device (/dev/dri/card1)
[   158.720] (II) systemd-logind: got fd for /dev/dri/card1 226:1 fd 12 paused 0
[   158.720] (**) OutputClass "nvidia" ModulePath extended to "/usr/lib/x86_64-linux-gnu/nvidia/xorg,/usr/lib/xorg/modules"
[   158.720] (**) OutputClass "Nvidia Prime" ModulePath extended to "/x86_64-linux-gnu/nvidia/xorg,/usr/lib/x86_64-linux-gnu/nvidia/xorg,/usr/lib/xorg/modules"
[   158.720] (**) OutputClass "Nvidia Prime" setting /dev/dri/card0 as PrimaryGPU
[   158.721] (--) PCI: (0:0:2:0) 8086:191d:17aa:222e rev 6, Mem @ 0xd2000000/16777216, 0x60000000/536870912, I/O @ 0x00005000/64, BIOS @ 0x????????/131072
[   158.721] (--) PCI:*(0:1:0:0) 10de:13b0:17aa:222e rev 162, Mem @ 0xd3000000/16777216, 0xc0000000/268435456, 0xd0000000/33554432, I/O @ 0x00004000/128, BIOS @ 0x????????/524288
[   158.721] (II) LoadModule: "glx"
[   158.721] (II) Loading /usr/lib/x86_64-linux-gnu/nvidia/xorg/libglx.so
[   158.724] (II) Module glx: vendor="NVIDIA Corporation"
[   158.724] 	compiled for 4.0.2, module version = 1.0.0
[   158.724] 	Module class: X.Org Server Extension
[   158.724] (II) NVIDIA GLX Module  390.48  Wed Mar 21 23:42:56 PDT 2018
[   158.724] (II) LoadModule: "intel"
[   158.724] (II) Loading /usr/lib/xorg/modules/drivers/intel_drv.so
[   158.724] (II) Module intel: vendor="X.Org Foundation"
[   158.724] 	compiled for 1.19.5, module version = 2.99.917
[   158.724] 	Module class: X.Org Video Driver
[   158.724] 	ABI class: X.Org Video Driver, version 23.0
[   158.724] (II) intel: Driver for Intel(R) Integrated Graphics Chipsets:
	i810, i810-dc100, i810e, i815, i830M, 845G, 854, 852GM/855GM, 865G,
	915G, E7221 (i915), 915GM, 945G, 945GM, 945GME, Pineview GM,
	Pineview G, 965G, G35, 965Q, 946GZ, 965GM, 965GME/GLE, G33, Q35, Q33,
	GM45, 4 Series, G45/G43, Q45/Q43, G41, B43
[   158.725] (II) intel: Driver for Intel(R) HD Graphics
[   158.725] (II) intel: Driver for Intel(R) Iris(TM) Graphics
[   158.725] (II) intel: Driver for Intel(R) Iris(TM) Pro Graphics
[   158.725] xf86EnableIOPorts: failed to set IOPL for I/O (Operation not permitted)
[   158.725] (II) intel(G0): Using Kernel Mode Setting driver: i915, version 1.6.0 20171023
[   158.725] (II) intel(G0): SNA compiled: xserver-xorg-video-intel 2:2.99.917+git20171229-1 (Timo Aaltonen <tjaalton@debian.org>)
[   158.725] (II) intel(G0): SNA compiled for use with valgrind
[   158.745] (EE) No devices detected.
[   158.745] (II) Applying OutputClass "nvidia" to /dev/dri/card0
[   158.745] 	loading driver: nvidia
[   158.745] (II) Applying OutputClass "Nvidia Prime" to /dev/dri/card0
[   158.745] 	loading driver: nvidia
[   158.745] (==) Matched nvidia as autoconfigured driver 0
[   158.745] (==) Matched nvidia as autoconfigured driver 1
[   158.745] (==) Matched nouveau as autoconfigured driver 2
[   158.745] (==) Matched nouveau as autoconfigured driver 3
[   158.745] (==) Matched modesetting as autoconfigured driver 4
[   158.745] (==) Matched fbdev as autoconfigured driver 5
[   158.745] (==) Matched vesa as autoconfigured driver 6
[   158.745] (==) Assigned the driver to the xf86ConfigLayout
[   158.745] (II) LoadModule: "nvidia"
[   158.745] (II) Loading /usr/lib/x86_64-linux-gnu/nvidia/xorg/nvidia_drv.so
[   158.746] (II) Module nvidia: vendor="NVIDIA Corporation"
[   158.746] 	compiled for 4.0.2, module version = 1.0.0
[   158.746] 	Module class: X.Org Video Driver
[   158.746] (II) LoadModule: "nouveau"
[   158.746] (II) Loading /usr/lib/xorg/modules/drivers/nouveau_drv.so
[   158.747] (II) Module nouveau: vendor="X.Org Foundation"
[   158.747] 	compiled for 1.19.3, module version = 1.0.15
[   158.747] 	Module class: X.Org Video Driver
[   158.747] 	ABI class: X.Org Video Driver, version 23.0
[   158.747] (II) LoadModule: "modesetting"
[   158.747] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so
[   158.747] (II) Module modesetting: vendor="X.Org Foundation"
[   158.747] 	compiled for 1.19.6, module version = 1.19.6
[   158.747] 	Module class: X.Org Video Driver
[   158.747] 	ABI class: X.Org Video Driver, version 23.0
[   158.747] (II) LoadModule: "fbdev"
[   158.747] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so
[   158.748] (II) Module fbdev: vendor="X.Org Foundation"
[   158.748] 	compiled for 1.19.3, module version = 0.4.4
[   158.748] 	Module class: X.Org Video Driver
[   158.748] 	ABI class: X.Org Video Driver, version 23.0
[   158.748] (II) LoadModule: "vesa"
[   158.748] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so
[   158.748] (II) Module vesa: vendor="X.Org Foundation"
[   158.748] 	compiled for 1.19.3, module version = 2.3.4
[   158.748] 	Module class: X.Org Video Driver
[   158.748] 	ABI class: X.Org Video Driver, version 23.0
[   158.748] (II) intel: Driver for Intel(R) Integrated Graphics Chipsets:
	i810, i810-dc100, i810e, i815, i830M, 845G, 854, 852GM/855GM, 865G,
	915G, E7221 (i915), 915GM, 945G, 945GM, 945GME, Pineview GM,
	Pineview G, 965G, G35, 965Q, 946GZ, 965GM, 965GME/GLE, G33, Q35, Q33,
	GM45, 4 Series, G45/G43, Q45/Q43, G41, B43
[   158.749] (II) intel: Driver for Intel(R) HD Graphics
[   158.749] (II) intel: Driver for Intel(R) Iris(TM) Graphics
[   158.749] (II) intel: Driver for Intel(R) Iris(TM) Pro Graphics
[   158.749] (II) NVIDIA dlloader X Driver  390.48  Wed Mar 21 23:18:15 PDT 2018
[   158.749] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
[   158.749] (II) NOUVEAU driver Date:   Fri Apr 21 14:41:17 2017 -0400
[   158.749] (II) NOUVEAU driver for NVIDIA chipset families :
[   158.749] 	RIVA TNT        (NV04)
[   158.749] 	RIVA TNT2       (NV05)
[   158.749] 	GeForce 256     (NV10)
[   158.749] 	GeForce 2       (NV11, NV15)
[   158.749] 	GeForce 4MX     (NV17, NV18)
[   158.749] 	GeForce 3       (NV20)
[   158.749] 	GeForce 4Ti     (NV25, NV28)
[   158.749] 	GeForce FX      (NV3x)
[   158.749] 	GeForce 6       (NV4x)
[   158.749] 	GeForce 7       (G7x)
[   158.749] 	GeForce 8       (G8x)
[   158.749] 	GeForce GTX 200 (NVA0)
[   158.749] 	GeForce GTX 400 (NVC0)
[   158.749] (II) modesetting: Driver for Modesetting Kernel Drivers: kms
[   158.749] (II) FBDEV: driver for framebuffer: fbdev
[   158.749] (II) VESA: driver for VESA chipsets: vesa
[   158.749] xf86EnableIOPorts: failed to set IOPL for I/O (Operation not permitted)
[   158.750] (WW) Falling back to old probe method for modesetting
[   158.750] (WW) Falling back to old probe method for fbdev
[   158.750] (WW) Falling back to old probe method for vesa
[   158.750] (WW) Falling back to old probe method for modesetting
[   158.750] (WW) Falling back to old probe method for fbdev
[   158.750] (WW) Falling back to old probe method for vesa
[   158.750] (II) systemd-logind: releasing fd for 226:0
[   158.815] (EE) [drm] Failed to open DRM device for (null): -2
[   158.839] (II) modeset(G1): using drv /dev/dri/card0
[   158.839] (EE) No devices detected.
[   158.839] (EE) 
Fatal server error:
[   158.839] (EE) no screens found(EE) 
[   158.839] (EE) 
Please consult the The X.Org Foundation support 
	 at http://wiki.x.org

 for help. 
[   158.839] (EE) Please also check the log file at "/home/arkamax/.local/share/xorg/Xorg.0.log" for additional information.
[   158.839] (EE) 
[   158.854] (EE) Server terminated with error (1). Closing log file.

There are also a few entries of “xf86EnableIOPorts: failed to set IOPL for I/O (Operation not permitted)” - not sure what this refers to. It looks like the nouveau driver is being loaded (which I was told is the legacy behavior) - but I’m fairly positive it was blacklisted:

$ cat /etc/modprobe.d/nvidia-graphics-drivers.conf 
blacklist nouveau
blacklist lbm-nouveau
alias nouveau off
alias lbm-nouveau off

If I do sudo prime-select nvidia at that point, I can startx, but it’s stuck in a login prompt loop. Rebooting brings things back to normal, but then I’m not using the Intel card.

Of note, make install on the Prime-Select package above generated an X config file for my laptop LCD, but instead of 3840x2160 (the native resolution) it has 1920x1080:

Section "Monitor"
    Identifier     "eDP1"
    Modeline "1920x1080_60.02"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync
    Option "PreferredMode" "1920x1080_60.02"
EndSection

That’s not the biggest problem, though, and I think I can fix it by hand; just thought I’d mention it.
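For reference, a hand-fixed Monitor section for the native 3840x2160 panel might look like the following; the modeline is what cvt 3840 2160 60 generates on my machine, so treat the exact timing numbers as illustrative:

```
Section "Monitor"
    Identifier     "eDP1"
    # Generated with: cvt 3840 2160 60
    Modeline "3840x2160_60.00"  712.75  3840 4160 4576 5312  2160 2163 2168 2237 -hsync +vsync
    Option "PreferredMode" "3840x2160_60.00"
EndSection
```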

Feels like I’m almost there but not quite :|
nvidia-bug-report.log.gz (121 KB)

Please remove the files
/usr/share/X11/xorg.conf.d/11-nvidia-prime.conf
/usr/share/X11/xorg.conf.d/10-nvidia.conf
and retry.
BTW, the nvidia-bug-report.log you’re attaching has been the same from the beginning. Delete the old one before creating a new one.

Moved those files away, rebooted, switched to intel - same thing. I then created a brand-new bug report after removing the previous one. Interestingly, until I put those files back I could not get X to start even after sudo prime-select nvidia - only after restoring them could I reboot into a prime-selected nvidia X session.
nvidia-bug-report.log.gz (129 KB)

It looks like the prime-socket service is not running or is failing; it doesn’t unload the nvidia driver. Please check whether it is running:
sudo systemctl status prime-socket
I don’t know where it outputs its errors; maybe check the journal:
sudo journalctl -b 0

TBH, that solution has a lot of flaws. It looks like it only unloads the nvidia modules on switching, but not on boot. The nvidia-prime-boot service, which is meant to turn off the nvidia GPU, is never enabled. Instead of stopping and starting lightdm, a more general approach would be to restart display-manager, which would also be required for the nvidia-prime-boot service to run if it were enabled.
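On Debian/Ubuntu, display-manager.service is just an alias: a symlink under /etc/systemd/system pointing at whichever DM is installed, which is why restarting it is DM-agnostic. A quick, read-only way to check:

```shell
# display-manager.service is a symlink to the installed DM's unit
# (lightdm.service, gdm3.service, sddm.service, ...).
dm=$(readlink /etc/systemd/system/display-manager.service 2>/dev/null || true)
echo "display-manager resolves to: ${dm:-none found}"
# To restart it (this logs out the running X session):
#   sudo systemctl restart display-manager
```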