Lenovo ThinkPad P52 with modesetting and nvidia not working and xf86-video-intel with bumblebee and ...

Hello everyone,

I’m trying to get my Lenovo ThinkPad P52 (with NVIDIA Quadro P1000 Mobile and Intel UHD Graphics 630) working using modesetting and the nvidia driver.

I’m currently using my Lenovo ThinkPad P52 with a Lenovo Thunderbolt Dock Gen 2 in combination with 2 external displays connected to DisplayPort and the xf86-video-intel driver (git version from a few weeks ago) using the intel-virtual-output utility, which was working fine.

I’m trying to migrate to the xf86-video-modesetting driver, as xf86-video-intel is buggy and deprecated. I’m currently constantly switching between the two, since I can only use the external displays with the xf86-video-intel driver and the internal laptop screen with modesetting; xf86-video-intel regularly gives me graphical glitches and display corruption, whereas the modesetting driver doesn’t.

Unfortunately I’m having trouble getting this configuration to work with the modesetting driver.

I’m currently running kernel 5.3.11, xorg-server-1.20.6 (which contains all the commits for PRIME Render Offload functionality) and nvidia-drivers-440.31 with the latest firmware available for the laptop (I believe it’s 1.31). The Intel GPU is enabled as the primary GPU (I can change the setting in the BIOS, but I would like to use the iGPU due to power savings).

One other thing to note is that the external display outputs are hardwired to the NVIDIA graphics card.

I’ve tried following the instructions in Chapter 33 (Offloading Graphics Display with RandR 1.4) and Chapter 34 (PRIME Render Offload). The iGPU is working fine, but when using modesetting I’m unable to use either the nvidia graphics card or the external displays attached to the Thunderbolt docking station.

I’ve tried adding nvidia-drm.modeset=1 to /etc/default/grub and regenerating my grub config file, but it doesn’t change anything, aside from a slight delay during boot and a few messages in dmesg about the atomic mode set failing. I’ve also tried blacklisting bbswitch, but that doesn’t resolve anything, and adding IgnoreABI didn’t help either.
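For reference, this is roughly what the change looks like (standard GRUB2 paths on my Gentoo install; adjust if yours differ):

# /etc/default/grub - append the parameter to the existing kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="... nvidia-drm.modeset=1"

# then regenerate the grub configuration
grub-mkconfig -o /boot/grub/grub.cfg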

Could there be some other / additional configuration that I need?

My xorg-server .conf configuration file:

Section "ServerFlags"
    Option "IgnoreABI" "1"
EndSection

Section "ServerLayout"
    Identifier "layout"
    Screen 0 "iGPU"
    Option "AllowNVIDIAGPUScreens"
EndSection

Section "Device"
    Identifier "iGPU"
    Driver "modesetting"
    BusID "PCI:00:02:00"
EndSection

Section "Screen"
    Identifier "iGPU"
    Device "iGPU"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:01:00:00"
EndSection

After examining the Xorg.0.log it seems the displays are detected, but for some reason the desktop is not extended automatically and the screens are not detected in GNOME. xrandr lists the providers modesetting and NVIDIA-G0, but when I try to connect the two with --setprovideroutputsource, I’m receiving an error; the commands I’m using and the relevant Xorg.0.log excerpt are below.
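The provider plumbing looks roughly like this (a sketch; the provider names are simply what xrandr reports on my machine):

xrandr --listproviders
# shows the providers "modesetting" and "NVIDIA-G0" here

# general form: make <sink-provider> display what <source-provider> renders;
# this is the call that errors out when I point the two providers at each other
xrandr --setprovideroutputsource <sink-provider> <source-provider>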

[ 18.145] ABI class: X.Org ANSI C Emulation, version 0.4
[ 18.145] (==) NVIDIA(G0): Depth 24, (==) framebuffer bpp 32
[ 18.145] (==) NVIDIA(G0): RGB weight 888
[ 18.145] (==) NVIDIA(G0): Default visual is TrueColor
[ 18.145] (==) NVIDIA(G0): Using gamma correction (1.0, 1.0, 1.0)
[ 18.146] (**) Option "AllowNVIDIAGpuScreens"
[ 18.146] (**) NVIDIA(G0): Enabling 2D acceleration
[ 18.146] (II) Loading sub module "glxserver_nvidia"
[ 18.146] (II) LoadModule: "glxserver_nvidia"
[ 18.146] (II) Loading /usr/lib64/xorg/modules/extensions/libglxserver_nvidia.so
[ 18.163] (II) Module glxserver_nvidia: vendor="NVIDIA Corporation"
[ 18.163] compiled for 1.6.99.901, module version = 1.0.0
[ 18.163] Module class: X.Org Server Extension
[ 18.163] (II) NVIDIA GLX Module 440.31 Sun Oct 27 02:14:20 UTC 2019
[ 18.164] (II) NVIDIA: The X server supports PRIME Render Offload.
[ 18.594] (--) NVIDIA(0): Valid display device(s) on GPU-0 at PCI:1:0:0
[ 18.594] (--) NVIDIA(0): DFP-0.2
[ 18.594] (--) NVIDIA(0): DFP-0.3
[ 18.594] (--) NVIDIA(0): DFP-0 (boot)
[ 18.594] (--) NVIDIA(0): DFP-1
[ 18.594] (--) NVIDIA(0): DFP-2
[ 18.594] (--) NVIDIA(0): DFP-3
[ 18.594] (--) NVIDIA(0): DFP-4
[ 18.595] (II) NVIDIA(G0): NVIDIA GPU Quadro P1000 (GP107GL-A) at PCI:1:0:0 (GPU-0)
[ 18.595] (--) NVIDIA(G0): Memory: 4194304 kBytes
[ 18.595] (--) NVIDIA(G0): VideoBIOS: 86.07.63.00.4a
[ 18.595] (II) NVIDIA(G0): Detected PCI Express Link width: 16X
[ 18.595] (--) NVIDIA(GPU-0): Samsung SyncMaster (DFP-0.2): connected
[ 18.595] (--) NVIDIA(GPU-0): Samsung SyncMaster (DFP-0.2): Internal DisplayPort
[ 18.595] (--) NVIDIA(GPU-0): Samsung SyncMaster (DFP-0.2): 1440.0 MHz maximum pixel clock
[ 18.595] (--) NVIDIA(GPU-0):
[ 18.595] (--) NVIDIA(GPU-0): Samsung SyncMaster (DFP-0.3): connected
[ 18.595] (--) NVIDIA(GPU-0): Samsung SyncMaster (DFP-0.3): Internal DisplayPort
[ 18.595] (--) NVIDIA(GPU-0): Samsung SyncMaster (DFP-0.3): 1440.0 MHz maximum pixel clock
[ 18.595] (--) NVIDIA(GPU-0):
[ 18.595] (--) NVIDIA(GPU-0): DFP-0: disconnected
[ 18.595] (--) NVIDIA(GPU-0): DFP-0: Internal DisplayPort
[ 18.595] (--) NVIDIA(GPU-0): DFP-0: 1440.0 MHz maximum pixel clock
[ 18.595] (--) NVIDIA(GPU-0):
[ 18.595] (--) NVIDIA(GPU-0): DFP-1: disconnected
[ 18.595] (--) NVIDIA(GPU-0): DFP-1: Internal DisplayPort
[ 18.595] (--) NVIDIA(GPU-0): DFP-1: 1440.0 MHz maximum pixel clock
[ 18.595] (--) NVIDIA(GPU-0):
[ 18.595] (--) NVIDIA(GPU-0): DFP-2: disconnected
[ 18.595] (--) NVIDIA(GPU-0): DFP-2: Internal TMDS
[ 18.595] (--) NVIDIA(GPU-0): DFP-2: 165.0 MHz maximum pixel clock
[ 18.595] (--) NVIDIA(GPU-0):
[ 18.595] (--) NVIDIA(GPU-0): DFP-3: disconnected
[ 18.595] (--) NVIDIA(GPU-0): DFP-3: Internal DisplayPort
[ 18.595] (--) NVIDIA(GPU-0): DFP-3: 1440.0 MHz maximum pixel clock
[ 18.595] (--) NVIDIA(GPU-0):
[ 18.595] (--) NVIDIA(GPU-0): DFP-4: disconnected
[ 18.595] (--) NVIDIA(GPU-0): DFP-4: Internal TMDS
[ 18.595] (--) NVIDIA(GPU-0): DFP-4: 165.0 MHz maximum pixel clock
[ 18.595] (--) NVIDIA(GPU-0):
[ 18.607] (II) NVIDIA(G0): Validated MetaModes:
[ 18.607] (II) NVIDIA(G0): "NULL"
[ 18.607] (**) NVIDIA(G0): Virtual screen size configured to be 1920 x 1080
[ 18.607] (WW) NVIDIA(G0): Unable to get display device for DPI computation.
[ 18.607] (==) NVIDIA(G0): DPI set to (75, 75); computed from built-in default
[ 18.652] (==) modeset(0): Backing store enabled
[ 18.652] (==) modeset(0): Silken mouse enabled
[ 18.713] (II) modeset(0): Initializing kms color map for depth 24, 8 bpc.
[ 18.713] (==) modeset(0): DPMS enabled
[ 18.713] (II) modeset(0): [DRI2] Setup complete
[ 18.713] (II) modeset(0): [DRI2] DRI driver: i965
[ 18.713] (II) modeset(0): [DRI2] VDPAU driver: i965
[ 18.714] (II) NVIDIA: Using 24576.00 MB of virtual memory for indirect memory
[ 18.714] (II) NVIDIA: access.
[ 18.793] (II) NVIDIA(G0): Setting mode "NULL"
[ 18.799] (==) NVIDIA(G0): Disabling shared memory pixmaps
[ 18.799] (==) NVIDIA(G0): Backing store enabled
[ 18.799] (==) NVIDIA(G0): Silken mouse enabled
[ 18.800] (==) NVIDIA(G0): DPMS enabled
[ 18.800] (II) Loading sub module “dri2”
[ 18.800] (II) LoadModule: “dri2”
[ 18.800] (II) Module “dri2” already built-in
[ 18.800] (II) NVIDIA(G0): [DRI2] Setup complete
[ 18.800] (II) NVIDIA(G0): [DRI2] VDPAU driver: nvidia

In render offload mode, the external monitors connected to the nvidia gpu are not accessible. To use them, you’ll have to switch to a prime output profile, which uses the nvidia gpu to render.

Dear generix,

I’m not sure whether I fully understand what you mean, as I can’t find any reference to a prime output profile in the NVIDIA documentation (NVIDIA Accelerated Linux Graphics Driver README and Installation Guide).

Do you mean I’ll have to use the dGPU as the primary GPU, effectively disabling my iGPU in the BIOS / UEFI configuration menu (which is still possible on this laptop, since it has a multiplexer)? Because that’s exactly what I’m trying to avoid, due to the excessive power consumption and the fact that I regularly use the laptop on battery. This is what I’m doing right now using bumblebee with xf86-video-intel and intel-virtual-output, but that doesn’t work with the modesetting driver.

I would like my iGPU to be the primary one and the dGPU to be enabled only when an external monitor is connected or when I explicitly tell it to render on the dGPU, exactly what xf86-video-intel with intel-virtual-output is doing, but using the modesetting driver instead, since xf86-video-intel gives me graphical corruption from time to time whereas the modesetting driver doesn’t.

You don’t need to switch to nvidia-only in the BIOS; you can switch profiles in the OS, depending on the distribution. I don’t know which distro you’re currently using, so I can’t tell you the proper procedure.
I was just pointing out the limitations:
igpu only / igpu + render offload → no external monitors (on the nvidia gpu)
external monitors: dgpu renders everything, igpu displays = prime output: http://us.download.nvidia.com/XFree86/Linux-x86/319.12/README/randr14.html
The equivalent to using intel-virtual-output + bumblebee would be “reverse prime”: igpu renders everything, dgpu just displays. That doesn’t work yet with the nvidia driver since it doesn’t implement the prime output sink capability.
Which distribution are you running?

Generix,

I’m using Gentoo with a manually compiled kernel (gentoo-sources, all required options selected) version 5.3.11 with a few extra patches, systemd-232, xorg-server-1.20.6 and GNOME 3.30 with Wayland disabled (Xorg session).

What I’d like to accomplish is the following (if possible, or hopefully in the near future):

  • Use the Intel HD Graphics for display (power savings) by default with the modesetting driver (as it’s more stable)
  • Use the NVIDIA Quadro P1000 for display and / or rendering when external monitors are connected or when it’s connected to my thunderbolt dock
  • Use the NVIDIA Quadro P1000 for rendering when launching 3D intensive applications / performing compute operations

Right now it works (sort of) using xf86-video-intel with intel-virtual-output, bumblebee and optirun. I’d just like to get rid of the xf86-video-intel driver, because its stability issues and crashes have been bothering me as of late. I’ve been checking git and updating the driver to the latest revision frequently, hoping it would become more stable, but unfortunately that’s not the case.

Since Gentoo is DIY, there’s no ready-to-go ebuild for this; you’ll have to…DIY. I presume you’re still on eselect-opengl instead of libglvnd.
Try this:
remove bumblebee and the intel driver but let bbswitch stay.
create /etc/X11/xorg.conf.d/11-nvidia-prime.conf

Section "OutputClass"
    Identifier "nvidia"
    MatchDriver "nvidia-drm"
    Driver "nvidia"
    Option "AllowEmptyInitialConfiguration"
    Option "PrimaryGPU" "yes"
EndSection

for GDM/GNOME, create two files named optimus.desktop, one in /etc/xdg/autostart/ and one in /usr/share/gdm/greeter/autostart/, containing

[Desktop Entry]
Type=Application
Name=Optimus
Exec=sh -c "xrandr --setprovideroutputsource modesetting NVIDIA-0; xrandr --auto"
NoDisplay=true
X-GNOME-Autostart-Phase=DisplayServer

This should enable usage of prime output, making the external monitors accessible.
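As a quick sanity check (not required), after logging in something like this should show both providers and the external outputs:

xrandr --listproviders        # both the modesetting and NVIDIA providers should be listed
xrandr | grep -w connected    # the external outputs should now show up as connected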

To be able to switch back to igpu only for power saving, create
/etc/systemd/system/disablenvidia.service

[Unit]
Description=Disable Nvidia GPU
Before=display-manager.service

[Service]
Type=oneshot
ExecStart=/etc/X11/optimus/disablenvidia.sh

[Install]
WantedBy=display-manager.service

and /etc/X11/optimus/disablenvidia.sh

#!/bin/bash
modprobe -r nvidia
modprobe bbswitch
echo "OFF" >/proc/acpi/bbswitch
logger Nvidia OFF
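
You’ll probably also want the usual housekeeping so the script and the new unit are actually picked up (nothing specific to this setup):

chmod +x /etc/X11/optimus/disablenvidia.sh
systemctl daemon-reload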

Then you can use scripts to switch

setintel.sh

#!/bin/sh
eselect opengl set xorg-x11
systemctl enable disablenvidia
echo -e "blacklist nvidia\nblacklist nvidia-drm\nblacklist nvidia-modeset\nblacklist nvidia-uvm" >/etc/modprobe.d/nvidia-blacklist.conf

and reboot.

setnvidia.sh

#!/bin/sh
eselect opengl set nvidia
systemctl disable disablenvidia
rm /etc/modprobe.d/nvidia-blacklist.conf

and reboot.

You can combine them into one script using a case statement, as sketched below.
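A minimal combined version might look like this (just a sketch merging the two scripts above; the file name setgpu.sh is arbitrary):

#!/bin/bash
# setgpu.sh <intel|nvidia> - merges setintel.sh and setnvidia.sh from above
case "$1" in
    intel)
        eselect opengl set xorg-x11
        systemctl enable disablenvidia
        echo -e "blacklist nvidia\nblacklist nvidia-drm\nblacklist nvidia-modeset\nblacklist nvidia-uvm" >/etc/modprobe.d/nvidia-blacklist.conf
        ;;
    nvidia)
        eselect opengl set nvidia
        systemctl disable disablenvidia
        rm -f /etc/modprobe.d/nvidia-blacklist.conf
        ;;
    *)
        echo "usage: $0 intel|nvidia" >&2
        exit 1
        ;;
esac
echo "Reboot for the change to take effect."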

Generix,

Thank you for your help and creative thinking, I really appreciate it; this is a step forward in the direction I want to go :-)

I’m still using eselect-opengl indeed. The libglvnd option is still masked by default on Gentoo, though I could unmask it.

I was already working on a few bash scripts of my own, based on detection: a udev rule would check whether the Thunderbolt docking station is present and automatically assume nvidia (roughly along the lines of the sketch below), and a separate solution would switch to nvidia when a user is logged in and an external screen is detected in the Xorg.0.log. The problem was that it wouldn’t switch automatically without logging out and back in or restarting X, which I was still figuring out. That was a hassle :D
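Roughly the kind of rule I was experimenting with (an untested sketch; the thunderbolt subsystem match is a guess on my part and the helper script path is made up):

# /etc/udev/rules.d/99-tb-dock.rules (hypothetical)
# when a Thunderbolt device shows up, run a helper that flips the GPU profile
ACTION=="add", SUBSYSTEM=="thunderbolt", RUN+="/usr/local/bin/dock-attached.sh"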

I was actually considering writing a daemon or script (and sharing it with the community) that starts before X, uses udev to detect whether a docking station is present, and automatically assumes nvidia in that case; it would also assume nvidia when a screen is detected before X is started. Once the nvidia module is loaded it should automatically detect screens (even when hotplugged). The only exception is that when I start my laptop and plug in a screen later, the screen wouldn’t work, but that’s the case now too (though I can currently start intel-virtual-output manually to resolve that issue).

Is there any way to detect connected displays before starting X, without going through the Xorg.0.log? For example by querying the nvidia-drm driver somehow (on my main computer at home I know that once nvidia-drm is initialised I get a cloned screen in native resolution on both monitors connected to my NVIDIA card), or do you know whether anything is registered with D-Bus or udev? Because I’d be able to use that too, though I’d have to modprobe the module and log out and back in to use nvidia. Or am I thinking too complicated? :-)

I’m limited to Bash / Perl / PHP / Python / PowerHell though.

It would be best if NVIDIA supported this properly, as it would resolve all the issues, but I’ve got to do it with what I’ve got, I guess.

You can enable KMS using the kernel parameter nvidia-drm.modeset=1 or the corresponding modprobe module option file. This creates nodes in /sys/class/drm like card1-HDMI-A-1 containing a file ‘status’ which reads ‘connected’ when a monitor is connected.
Check that in a script running before disablenvidia to autoswitch, for example like the sketch below.
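i.e. something along these lines (just a sketch; card1 is an assumption, the intel gpu usually ends up as card0 and the nvidia gpu as card1, and restricting to the nvidia card avoids matching the always-connected internal panel):

#!/bin/bash
# check whether any connector on the nvidia card reports a monitor,
# run this before disablenvidia to decide whether to switch
for status in /sys/class/drm/card1-*/status; do
    [ -e "$status" ] || continue
    if grep -qx connected "$status"; then
        logger "Monitor on nvidia GPU detected ($status)"
        exit 0
    fi
done
exit 1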

Generix,

I was aware of the nvidia-drm.modeset=1 kernel parameter, which I’m using at home and experimenting with on my laptop, but I wasn’t aware it created nodes indicating whether a monitor is connected. That’s exactly what I was looking for, as I can simply cat or grep those files and detect monitors this way.

Thank you for all the help and assistance.

For connecting monitors to the running system, you might use an acpid-triggered script, though you’d still have to restart (at least X), which is a limitation of the Xserver, not the driver.
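A minimal acpid hook could look like this (a sketch; the event regex is a placeholder, run acpi_listen while plugging the dock/monitor to see what your machine actually reports, and the action script path is made up):

# /etc/acpi/events/display-hotplug (hypothetical file name)
event=.*
action=/etc/acpi/actions/display-hotplug.sh %e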