The all-new OutputSink feature, aka reverse PRIME

My laptop’s resolution is 1920x1080, and the resolution of my external monitors is 1920x1200. When an external monitor is set to 1920x1200, I get a black horizontal bar near the bottom of the screen, but not flush with it: below the bar, a small strip of wallpaper is visible but unusable. So the black bar in GNOME (drawn by the Window List extension) is not all the way at the bottom, as it is supposed to be. It looks like the Window List extension is laid out for exactly 1920x1080, because the number of missing pixels roughly matches the 120-pixel difference between the two resolutions.

I see there is another tracking issue, number 3063041, mentioned in another post.

Can you give a status update on these? I’ve been running a small Discord server for people with ASUS gaming laptops, and a lot of people are very excited about the possibilities this will enable.

I’m experiencing a serious issue with reverse PRIME: I cannot turn it off. It may seem strange, but it is true. I followed this guide to set it up. The only difference is that I use the following configuration:

Section "ServerLayout"
        Identifier "layout"
        Screen 0 "intel"
        Inactive "nvidia"
        Option "AllowNVIDIAGPUScreens"
EndSection

Section "Device"
        Identifier "intel"
        Driver "modesetting"
        BusID "PCI:0:2:0"
        Option "DRI" "3"
EndSection

Section "Screen"
        Identifier "intel"
        Device "intel"
EndSection

Section "Device"
        Identifier "nvidia"
        Driver "nvidia"
        BusID "PCI:1:0:0"
EndSection

Section "Screen"
        Identifier "nvidia"
        Device "nvidia"
EndSection

Then I ran xrandr --setprovideroutputsource NVIDIA-G0 modesetting and xrandr --auto, and all was well: the dGPU’s outputs showed up in the output of xrandr. I was indeed able to connect an external monitor, and it worked, apart from one quirk in GNOME: if the external monitor is positioned above the internal one, the two monitors act as if they were a single one; for example, making an app full screen causes it to span both monitors.
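
For completeness, the full setup sequence looks roughly like this (HDMI-1-0 is one of the aliases my external output gets; eDP-1 is my guess at the internal panel’s name, so check xrandr’s output for yours):

xrandr --setprovideroutputsource NVIDIA-G0 modesetting
xrandr --auto
xrandr --output HDMI-1-0 --auto --above eDP-1    # position the external monitor explicitly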

xrandr --listproviders reports:

Providers: number : 2
Provider 0: id: 0x43 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 3 outputs: 1 associated providers: 1 name:modesetting
Provider 1: id: 0x26e cap: 0x2, Sink Output crtcs: 4 outputs: 5 associated providers: 1 name:NVIDIA-G0

Unfortunately, I cannot turn it off. I tried:

  • restarting X
  • rebooting
  • downgrading the drivers to 440.100 and then upgrading
  • disabling optimus-manager
  • manually starting X
  • disabling D3 power management
  • both 450.57 and 450.66 series drivers
  • the latest version of X from git

and combinations of those. The setting persists.

NVIDIA driver version: 450.66
XOrg server version: 1.20.9-1 (Manjaro)
XOrg ANSI C Emulation: 0.4
XOrg Video Driver: 24.1
XOrg XInput driver : 24.1
XOrg Server Extension : 10.0
Linux kernel version: 5.4.60-2-MANJARO
DE: GNOME 3.36.5
DM: GDM

When an external display is connected (HDMI) and I run xrandr --setprovideroutputsource NVIDIA-G0 0x0, the external display turns off, so I assume it no longer receives any output. However, xrandr still lists the outputs of the NVIDIA GPU, and the number of associated providers is still 1; shouldn’t it drop to 0? By contrast, when I use PRIME to render everything on the NVIDIA GPU (xrandr --setprovideroutputsource modesetting NVIDIA-0) and then execute xrandr --setprovideroutputsource modesetting 0x0, the internal display goes blank, it disappears from the output of xrandr, and the number of associated providers drops to zero, as expected.
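
For reference, the exact sequence I use when trying to disconnect the sink, and to check the result, is roughly:

xrandr --setprovideroutputsource NVIDIA-G0 0x0    # 0x0 = no provider, i.e. disconnect the sink from its source
xrandr --listproviders                            # "associated providers" should, I would expect, drop to 0
xrandr                                            # and the dGPU outputs should disappear from this list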


When I execute xrandr --setprovideroutputsource NVIDIA-G0 0x0 for the second time, the server hangs or dies:

// first xrandr --setprovideroutputsource NVIDIA-G0 0x0
[  4641.429] (II) NVIDIA(G0): Setting mode "NULL"
[  4641.474] (II) modeset(0): EDID vendor "BOE", prod id 2125
[  4641.474] (II) modeset(0): Printing DDC gathered Modelines:
[  4641.474] (II) modeset(0): Modeline "1920x1080"x0.0  342.05  1920 2028 2076 2080  1080 1090 1100 1142 +hsync -vsync (164.4 kHz eP)
[  4641.500] (II) modeset(0): EDID vendor "BOE", prod id 2125
[  4641.500] (II) modeset(0): Printing DDC gathered Modelines:
[  4641.500] (II) modeset(0): Modeline "1920x1080"x0.0  342.05  1920 2028 2076 2080  1080 1090 1100 1142 +hsync -vsync (164.4 kHz eP)
[  4641.502] (II) modeset(0): EDID vendor "BOE", prod id 2125
[  4641.502] (II) modeset(0): Printing DDC gathered Modelines:
[  4641.502] (II) modeset(0): Modeline "1920x1080"x0.0  342.05  1920 2028 2076 2080  1080 1090 1100 1142 +hsync -vsync (164.4 kHz eP)
[  4641.502] (--) NVIDIA(GPU-0): DFP-0: disconnected
[  4641.502] (--) NVIDIA(GPU-0): DFP-0: Internal DisplayPort
[  4641.502] (--) NVIDIA(GPU-0): DFP-0: 2660.0 MHz maximum pixel clock
[  4641.502] (--) NVIDIA(GPU-0): 
[  4641.503] (--) NVIDIA(GPU-0): DFP-1: disconnected
[  4641.503] (--) NVIDIA(GPU-0): DFP-1: Internal TMDS
[  4641.503] (--) NVIDIA(GPU-0): DFP-1: 165.0 MHz maximum pixel clock
[  4641.503] (--) NVIDIA(GPU-0): 
[  4641.503] (--) NVIDIA(GPU-0): DFP-2: disconnected
[  4641.503] (--) NVIDIA(GPU-0): DFP-2: Internal DisplayPort
[  4641.503] (--) NVIDIA(GPU-0): DFP-2: 2660.0 MHz maximum pixel clock
[  4641.503] (--) NVIDIA(GPU-0): 
[  4641.503] (--) NVIDIA(GPU-0): DFP-3: disconnected
[  4641.503] (--) NVIDIA(GPU-0): DFP-3: Internal TMDS
[  4641.503] (--) NVIDIA(GPU-0): DFP-3: 165.0 MHz maximum pixel clock
[  4641.503] (--) NVIDIA(GPU-0): 
[  4641.533] (--) NVIDIA(GPU-0): Samsung S22F350 (DFP-4): connected
[  4641.533] (--) NVIDIA(GPU-0): Samsung S22F350 (DFP-4): Internal TMDS
[  4641.533] (--) NVIDIA(GPU-0): Samsung S22F350 (DFP-4): 600.0 MHz maximum pixel clock
[  4641.533] (--) NVIDIA(GPU-0): 
// second xrandr --setprovideroutputsource NVIDIA-G0 0x0
[  4662.608] (II) NVIDIA(G0): Setting mode "NULL"
[  4662.615] (EE) 
[  4662.615] (EE) Backtrace:
[  4662.615] (EE) 0: /usr/lib/Xorg (xorg_backtrace+0x53) [0x5608c2619c03]
[  4662.615] (EE) 1: /usr/lib/Xorg (0x5608c24d3000+0x151a45) [0x5608c2624a45]
[  4662.615] (EE) 2: /usr/lib/libc.so.6 (0x7f7addfe0000+0x3d6a0) [0x7f7ade01d6a0]
[  4662.616] (EE) 3: /usr/lib/libc.so.6 (gsignal+0x145) [0x7f7ade01d615]
[  4662.616] (EE) 4: /usr/lib/libc.so.6 (abort+0x116) [0x7f7ade006862]
[  4662.616] (EE) 5: /usr/lib/libc.so.6 (0x7f7addfe0000+0x7f5e8) [0x7f7ade05f5e8]
[  4662.616] (EE) 6: /usr/lib/libc.so.6 (0x7f7addfe0000+0x8727a) [0x7f7ade06727a]
[  4662.616] (EE) 7: /usr/lib/libc.so.6 (0x7f7addfe0000+0x88d4c) [0x7f7ade068d4c]
[  4662.616] (EE) 8: /usr/lib/xorg/modules/drivers/nvidia_drv.so (0x7f7adc9a5000+0x7af19) [0x7f7adca1ff19]
[  4662.616] (EE) 9: /usr/lib/xorg/modules/drivers/nvidia_drv.so (0x7f7adc9a5000+0x4eec1a) [0x7f7adce93c1a]
[  4662.616] (EE) 
[  4662.616] (EE) 
Fatal server error:
[  4662.616] (EE) Caught signal 6 (Aborted). Server aborting
// couple lines about consulting the X.Org foundation
// then the server hangs or terminates "gracefully"

When it hangs after the second “setprovideroutputsource”, gdb reveals this backtrace:

#0  0x00007f7ee44d2a1b in __lll_lock_wait_private () at /usr/lib/libc.so.6
#1  0x00007f7ee44d9da3 in calloc () at /usr/lib/libc.so.6
#2  0x00007f7ee46aa2ad in _dbus_pending_call_new_unlocked () at /usr/lib/libdbus-1.so.3
#3  0x00007f7ee4697cd5 in dbus_connection_send_with_reply () at /usr/lib/libdbus-1.so.3
#4  0x00007f7ee4698082 in dbus_connection_send_with_reply_and_block () at /usr/lib/libdbus-1.so.3
#5  0x000055712c87122b in ddxGiveUp ()
#6  0x000055712c851efc in FatalError ()
#7  0x000055712c857aa9 in  ()
#8  0x00007f7ee448a6a0 in <signal handler called> () at /usr/lib/libc.so.6
#9  0x00007f7ee448a615 in raise () at /usr/lib/libc.so.6
#10 0x00007f7ee4473862 in abort () at /usr/lib/libc.so.6
#11 0x00007f7ee44cc5e8 in __libc_message () at /usr/lib/libc.so.6
#12 0x00007f7ee44d427a in  () at /usr/lib/libc.so.6
#13 0x00007f7ee44d5b0c in _int_free () at /usr/lib/libc.so.6
#14 0x00007f7ee32fb0c9 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#15 0x000055712de50040 in  ()
#16 0x000055712f096300 in  ()
#17 0x000055712e086340 in  ()
#18 0x000055712e8a5e60 in  ()
#19 0x000055712e8a5fe0 in  ()
#20 0x000055712e6eaa20 in  ()
#21 0x00000000000085b1 in  ()
#22 0x00007f7ee32fb15f in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#23 0x000055712f096300 in  ()
#24 0x00007f7ee3300bc2 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#25 0x000055712f27f890 in  ()
#26 0x000055712e6eaa20 in  ()
#27 0x0000000000000000 in  ()
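
For reference, I capture these backtraces by attaching gdb to the running X server from an SSH session, roughly like this:

sudo gdb -p $(pidof Xorg)
(gdb) c
(reproduce the hang, interrupt with Ctrl+C, then)
(gdb) bt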

When it does not run into that deadlock, and dies “gracefully”, gdb shows this backtrace:

Thread 1 "Xorg" received signal SIGABRT, Aborted.
#0  0x00007fe350793615 in raise () at /usr/lib/libc.so.6
#1  0x00007fe35077c862 in abort () at /usr/lib/libc.so.6
#2  0x00007fe3507d55e8 in __libc_message () at /usr/lib/libc.so.6
#3  0x00007fe3507dd27a in  () at /usr/lib/libc.so.6
#4  0x00007fe3507de64c in _int_free () at /usr/lib/libc.so.6
#5  0x00007fe34f195f04 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#6  0x00007fe34f6040c9 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#7  0x0000557a91d97040 in  ()
#8  0x0000557a92f33470 in  ()
#9  0x0000557a91fcd310 in  ()
#10 0x0000557a927631e0 in  ()
#11 0x0000557a92763360 in  ()
#12 0x0000557a925a7da0 in  ()
#13 0x0000000000000195 in  ()
#14 0x00007fe34f60415f in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#15 0x0000557a92f33470 in  ()
#16 0x00007fe34f609bc2 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#17 0x0000557a930a4900 in  ()
#18 0x0000557a925a7da0 in  ()
#19 0x0000000000000000 in  ()

When I run the command just once and then close my GNOME session, the X server also dies or hangs; gdb reveals a similar stack:

#0  0x00007f0820f3ca1b in __lll_lock_wait_private () at /usr/lib/libc.so.6
#1  0x00007f0820f43da3 in calloc () at /usr/lib/libc.so.6
#2  0x00007f08211142ad in _dbus_pending_call_new_unlocked () at /usr/lib/libdbus-1.so.3
#3  0x00007f0821101cd5 in dbus_connection_send_with_reply () at /usr/lib/libdbus-1.so.3
#4  0x00007f0821102082 in dbus_connection_send_with_reply_and_block () at /usr/lib/libdbus-1.so.3
#5  0x000055b0da90822b in ddxGiveUp ()
#6  0x000055b0da8e8efc in FatalError ()
#7  0x000055b0da8eeaa9 in  ()
#8  0x00007f0820ef46a0 in <signal handler called> () at /usr/lib/libc.so.6
#9  0x00007f0820ef4615 in raise () at /usr/lib/libc.so.6
#10 0x00007f0820edd862 in abort () at /usr/lib/libc.so.6
#11 0x00007f0820f365e8 in __libc_message () at /usr/lib/libc.so.6
#12 0x00007f0820f3e27a in  () at /usr/lib/libc.so.6
#13 0x00007f0820f3fb0c in _int_free () at /usr/lib/libc.so.6
#14 0x00007f081fd647f5 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#15 0x000055b0daf8c040 in  ()
#16 0x000055b0db1c2430 in  ()
#17 0x000055b0db79cdb0 in  ()
#18 0x00007f081fd5ee62 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#19 0x00007f08200f91d8 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#20 0x00007f08200f91d8 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#21 0x000055b0daf8c040 in  ()
#22 0x00007f081fd5eed8 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#23 0x0000000000000000 in  ()

When the server dies instead of hanging:

Thread 1 "Xorg" received signal SIGTERM, Terminated.
0x00007fa521bb45de in epoll_wait () from /usr/lib/libc.so.6 
(gdb) c
Continuing.
Thread 1 "Xorg" received signal SIGSEGV, Segmentation fault.
0x00007fa521b40310 in free () from /usr/lib/libc.so.6 
(gdb) bt
#0  0x00007fa521b40310 in free () at /usr/lib/libc.so.6
#1  0x00007fa5204f3f04 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#2  0x00007fa5209617f5 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#3  0x000055eaf7aaf040 in  ()
#4  0x000055eaf7ce5320 in  ()
#5  0x000055eaf82bfdb0 in  ()
#6  0x00007fa52095be62 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#7  0x00007fa520cf61d8 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#8  0x00007fa520cf61d8 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#9  0x000055eaf7aaf040 in  ()
#10 0x00007fa52095bed8 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#11 0x0000000000000000 in  ()

and the logs:

[   591.722] (EE) 
[   591.722] (EE) Backtrace:
[   591.722] (EE) 0: /usr/lib/Xorg (xorg_backtrace+0x53) [0x55eaf5d55c03]
[   591.722] (EE) 1: /usr/lib/Xorg (0x55eaf5c0f000+0x151a45) [0x55eaf5d60a45]
[   591.723] (EE) 2: /usr/lib/libc.so.6 (0x7fa521ab4000+0x3d6a0) [0x7fa521af16a0]
[   591.723] (EE) 3: /usr/lib/libc.so.6 (cfree+0x20) [0x7fa521b40310]
[   591.723] (EE) 4: /usr/lib/xorg/modules/drivers/nvidia_drv.so (0x7fa520479000+0x7af04) [0x7fa5204f3f04]
[   591.723] (EE) 5: /usr/lib/xorg/modules/drivers/nvidia_drv.so (0x7fa520479000+0x4e87f5) [0x7fa5209617f5]
[   591.723] (EE) 
[   591.723] (EE) Segmentation fault at address 0xfffffffffffffff7
[   591.723] (EE) 
Fatal server error:
[   591.723] (EE) Caught signal 11 (Segmentation fault). Server aborting

Redirecting the stderr stream of Xorg reveals another interesting thing:

double free or corruption (!prev)

Using the master branch of the xserver repository (commit 2902b78535ecc6821cc027351818b28a5c7fdbdc), the following traces may be acquired when executing the “setprovideroutputsource” command twice:

Thread 1 "X" received signal SIGSEGV, Segmentation fault.
0x00007fcf04575310 in free () from /usr/lib/libc.so.6
(gdb) bt
#0  0x00007fcf04575310 in free () at /usr/lib/libc.so.6
#1  0x00007fcf02fc2f04 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#2  0x00007fcf0347b209 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#3  0x000056134a15cae0 in  ()
#4  0x000056134b5a8760 in  ()
#5  0x000056134a392d00 in  ()
#6  0x000056134ab28dd0 in  ()
#7  0x000056134ab28f50 in  ()
#8  0x000056134a96db60 in  ()
#9  0x0000000000000000 in  ()
(gdb) c
Continuing.

Thread 1 "X" received signal SIGABRT, Aborted.
0x00007fcf04526615 in raise () from /usr/lib/libc.so.6
(gdb) bt
#0  0x00007fcf04526615 in raise () at /usr/lib/libc.so.6
#1  0x00007fcf0450f862 in abort () at /usr/lib/libc.so.6
#2  0x0000561349d8725a in System ()
#3  0x0000561349d90a0b in AbortServer ()
#4  0x0000561349d90f2b in FatalError ()
#5  0x0000561349d83576 in OsSigHandler ()
#6  0x00007fcf045266a0 in <signal handler called> () at /usr/lib/libc.so.6
#7  0x00007fcf04575310 in free () at /usr/lib/libc.so.6
#8  0x00007fcf02fc2f04 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#9  0x00007fcf0347b209 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#10 0x000056134a15cae0 in  ()
#11 0x000056134b5a8760 in  ()
#12 0x000056134a392d00 in  ()
#13 0x000056134ab28dd0 in  ()
#14 0x000056134ab28f50 in  ()
#15 0x000056134a96db60 in  ()
#16 0x0000000000000000 in  ()

and the X logs (verbosity 9):

// first "setprovideroutputsource"
[  4734.301] (II) modeset(0): EDID vendor "BOE", prod id 2125
[  4734.301] (II) modeset(0): Printing DDC gathered Modelines:
[  4734.301] (II) modeset(0): Modeline "1920x1080"x0.0  342.05  1920 2028 2076 2080  1080 1090 1100 1142 +hsync -vsync (164.4 kHz eP)
[  4734.303] (II) modeset(0): EDID vendor "BOE", prod id 2125
[  4734.303] (II) modeset(0): Printing DDC gathered Modelines:
[  4734.303] (II) modeset(0): Modeline "1920x1080"x0.0  342.05  1920 2028 2076 2080  1080 1090 1100 1142 +hsync -vsync (164.4 kHz eP)
[  4734.303] (--) NVIDIA(GPU-0): DFP-0: disconnected
[  4734.303] (--) NVIDIA(GPU-0): DFP-0: Internal DisplayPort
[  4734.303] (--) NVIDIA(GPU-0): DFP-0: DFP is not internal to notebook
[  4734.303] (--) NVIDIA(GPU-0): DFP-0 Name Aliases:
[  4734.303] (--) NVIDIA(GPU-0):   DFP
[  4734.303] (--) NVIDIA(GPU-0):   DFP-0
[  4734.303] (--) NVIDIA(GPU-0):   DPY-0
[  4734.303] (--) NVIDIA(GPU-0):   DP-1-0
[  4734.303] (--) NVIDIA(GPU-0):   DP-1-0
[  4734.303] (--) NVIDIA(GPU-0):   Connector-1
[  4734.303] (--) NVIDIA(GPU-0): DFP-0: 2660.0 MHz maximum pixel clock
[  4734.303] (--) NVIDIA(GPU-0): 
[  4734.304] (--) NVIDIA(GPU-0): DFP-1: disconnected
[  4734.304] (--) NVIDIA(GPU-0): DFP-1: Internal TMDS
[  4734.304] (--) NVIDIA(GPU-0): DFP-1: DFP is not internal to notebook
[  4734.304] (--) NVIDIA(GPU-0): DFP-1 Name Aliases:
[  4734.304] (--) NVIDIA(GPU-0):   DFP
[  4734.304] (--) NVIDIA(GPU-0):   DFP-1
[  4734.304] (--) NVIDIA(GPU-0):   DPY-1
[  4734.304] (--) NVIDIA(GPU-0):   DP-1-1
[  4734.304] (--) NVIDIA(GPU-0):   DP-1-1
[  4734.304] (--) NVIDIA(GPU-0):   Connector-1
[  4734.304] (--) NVIDIA(GPU-0): DFP-1: 165.0 MHz maximum pixel clock
[  4734.304] (--) NVIDIA(GPU-0): 
[  4734.304] (--) NVIDIA(GPU-0): DFP-2: disconnected
[  4734.304] (--) NVIDIA(GPU-0): DFP-2: Internal DisplayPort
[  4734.304] (--) NVIDIA(GPU-0): DFP-2: DFP is not internal to notebook
[  4734.304] (--) NVIDIA(GPU-0): DFP-2 Name Aliases:
[  4734.304] (--) NVIDIA(GPU-0):   DFP
[  4734.304] (--) NVIDIA(GPU-0):   DFP-2
[  4734.304] (--) NVIDIA(GPU-0):   DPY-2
[  4734.304] (--) NVIDIA(GPU-0):   DP-1-2
[  4734.304] (--) NVIDIA(GPU-0):   DP-1-2
[  4734.304] (--) NVIDIA(GPU-0):   Connector-2
[  4734.304] (--) NVIDIA(GPU-0): DFP-2: 2660.0 MHz maximum pixel clock
[  4734.304] (--) NVIDIA(GPU-0): 
[  4734.304] (--) NVIDIA(GPU-0): DFP-3: disconnected
[  4734.304] (--) NVIDIA(GPU-0): DFP-3: Internal TMDS
[  4734.304] (--) NVIDIA(GPU-0): DFP-3: DFP is not internal to notebook
[  4734.304] (--) NVIDIA(GPU-0): DFP-3 Name Aliases:
[  4734.304] (--) NVIDIA(GPU-0):   DFP
[  4734.304] (--) NVIDIA(GPU-0):   DFP-3
[  4734.304] (--) NVIDIA(GPU-0):   DPY-3
[  4734.304] (--) NVIDIA(GPU-0):   DP-1-3
[  4734.304] (--) NVIDIA(GPU-0):   DP-1-3
[  4734.304] (--) NVIDIA(GPU-0):   Connector-2
[  4734.304] (--) NVIDIA(GPU-0): DFP-3: 165.0 MHz maximum pixel clock
[  4734.304] (--) NVIDIA(GPU-0): 
[  4734.304] (--) NVIDIA(GPU-0): DFP-4: disconnected
[  4734.304] (--) NVIDIA(GPU-0): DFP-4: Internal TMDS
[  4734.304] (--) NVIDIA(GPU-0): DFP-4: DFP is not internal to notebook
[  4734.304] (--) NVIDIA(GPU-0): DFP-4 Name Aliases:
[  4734.304] (--) NVIDIA(GPU-0):   DFP
[  4734.304] (--) NVIDIA(GPU-0):   DFP-4
[  4734.304] (--) NVIDIA(GPU-0):   DPY-4
[  4734.304] (--) NVIDIA(GPU-0):   HDMI-1-0
[  4734.304] (--) NVIDIA(GPU-0):   HDMI-1-0
[  4734.304] (--) NVIDIA(GPU-0):   Connector-0
[  4734.304] (--) NVIDIA(GPU-0): DFP-4: 165.0 MHz maximum pixel clock
[  4734.304] (--) NVIDIA(GPU-0): 
[  4734.306] (II) modeset(0): EDID vendor "BOE", prod id 2125
[  4734.306] (II) modeset(0): Printing DDC gathered Modelines:
[  4734.306] (II) modeset(0): Modeline "1920x1080"x0.0  342.05  1920 2028 2076 2080  1080 1090 1100 1142 +hsync -vsync (164.4 kHz eP)
[  4735.348] (II) NVIDIA(G0): NoScanout X screen configured with resolution 640x480
[  4735.348] (II) NVIDIA(G0):     (default)
[  4735.349] (II) NVIDIA(G0): Setting mode "NULL"
[  4735.356] (II) modeset(0): EDID vendor "BOE", prod id 2125
[  4735.356] (II) modeset(0): Printing DDC gathered Modelines:
[  4735.356] (II) modeset(0): Modeline "1920x1080"x0.0  342.05  1920 2028 2076 2080  1080 1090 1100 1142 +hsync -vsync (164.4 kHz eP)
// second "setprovideroutputsource"
[  4797.544] (II) NVIDIA(G0): NoScanout X screen configured with resolution 640x480
[  4797.544] (II) NVIDIA(G0):     (default)
[  4797.545] (II) NVIDIA(G0): Setting mode "NULL"
[  4800.254] (EE) 
[  4800.254] (EE) Backtrace:
[  4800.254] (EE) 0: /usr/bin/X (xorg_backtrace+0xc0) [0x561349d7eed0]
[  4800.254] (EE) 1: /usr/bin/X (0x561349bc5000+0x1be4bd) [0x561349d834bd]
[  4800.254] (EE) 2: /usr/lib/libc.so.6 (0x7fcf044e9000+0x3d6a0) [0x7fcf045266a0]
[  4800.254] (EE) 3: /usr/lib/libc.so.6 (cfree+0x20) [0x7fcf04575310]
[  4800.254] (EE) 4: /usr/lib/xorg/modules/drivers/nvidia_drv.so (0x7fcf02f48000+0x7af04) [0x7fcf02fc2f04]
[  4800.254] (EE) 5: /usr/lib/xorg/modules/drivers/nvidia_drv.so (0x7fcf02f48000+0x533209) [0x7fcf0347b209]
[  4800.254] (EE) 
[  4800.254] (EE) Segmentation fault at address 0x0
[  4800.254] (EE) 
Fatal server error:
[  4800.254] (EE) Caught signal 11 (Segmentation fault). Server aborting

My theory is that the NVIDIA X driver tries to free something that (I assume) it already freed the first time, and depending on what the memory layout happens to be, glibc either catches it (SIGABRT plus a double-free warning) or does not (SIGSEGV followed by SIGABRT). But I can only guess. Honestly, I cannot understand why the outputs don’t disappear after xrandr --setprovideroutputsource NVIDIA-G0 0x0 is run.

Very interestingly, one time I traced Xorg with ltrace (ltrace -C -f -S -t -p $(pidof Xorg)), and for some peculiar reason the “setprovideroutputsource” command ran without error twice, and the outputs of the NVIDIA GPU did disappear. Alas, they returned after a reboot.

Another thing that may be interesting: using 440.100 (but not 450.57 or 450.66), nvidia-smi reports a corrupted infoROM after the NVIDIA GPU goes to sleep, but not before. Someone else who has the exact same laptop as I do confirmed that the same warning is printed with 440.100 on their machine as well (but not with 450.66), so I hope it was a bug in the 440.100 series drivers.

As far as I can see, the kernel driver doesn’t complain about anything; there are no warnings or errors.

I’ll gladly provide more logs/traces if needed. Thank you for reading this far.

I’m running a Dell Precision 7530 with an Intel UHD 630 iGPU and an NVIDIA Quadro RTX 5000 Max-Q dGPU. I have a rather unusual display output layout:

The image is taken from this technical page (see especially the top left corner, detailing the ‘DGFF card’).

I observe the same issues as everyone else:

  1. Setting the monitor connected to the NVIDIA GPU as the only display (aka NVIDIA-G0 sink) causes extreme compositing lag; however, the cursor works at full refresh rate. This is unusable. Clearly the reverse PRIME implementation needs work.

  2. Mirroring/extending the laptop and NVIDIA-G0 sink displays mitigates this problem somewhat, but there is still obvious lag and latency. My 144 Hz LG 27GL83-A display with G-Sync can only run at 75 Hz in this mode, and there is obvious tearing, etc.

A great new feature, but the implementation leaves a lot to be desired.

It is to be noted that my notebook (as shown in the image above) allows the dGPU to run alone, and in this case, KDE is composited perfectly, and the driver even detects and enables the G-Sync capability.

@aplattner, @amrits, @agoins

I try to refrain from doing this, and I’m sorry for the ping, but I’d really appreciate any input regarding the issue described in my previous comment.

@SRSR333
The compositing lag you observe when you only have a Reverse PRIME display is caused by a limitation in X Present: it can’t sync to PRIME sinks. It works with NVIDIA-based PRIME Sync (non-reverse) because the NVIDIA driver implements its own mechanism to allow vsync, but when the NVIDIA GPU is the sink, as in Reverse PRIME, we have to rely on what the server supports. There is some work being done upstream on this: Sync present to slave outputs (!460) · Merge requests · xorg / xserver · GitLab. Once that is implemented, some further work will be required on the NVIDIA side to support it. Unfortunately, the only current workaround is to have a non-Reverse PRIME display set as the primary in RandR, or to disable vsync.
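
To illustrate the workaround (the names here are only examples: eDP-1 is a typical name for an internal panel, and someapp stands for whatever you are launching):

xrandr --output eDP-1 --primary                  # make a non-Reverse PRIME output the RandR primary
vblank_mode=0 someapp                            # disable vsync for Mesa (iGPU-rendered) apps
__GL_SYNC_TO_VBLANK=0 someapp                    # disable vsync for NVIDIA-rendered (render offload) apps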

@pobrn
Thanks for reporting that issue, I was able to reproduce it and a fix is in progress, tracked under internal bug number 3115486.

Thank you for the reply, I’m looking forward to the bug fix.

I’m using a 15" 4K 60 Hz laptop and a 24" 4K 60 Hz display via reverse PRIME (HD 520/M1000M). Sync appears to be working, as does render offload; no power management for me, I suppose, as the model is too old. And while everything works, it does so noticeably slowly, and Xorg constantly uses 15% of the CPU and 50% of dGPU time when the external display is connected. I’ve looked at the logs and didn’t see anything unusual, and I see that unreasonably high system utilization has already been mentioned here. I’m including a bug report log just in case it might be helpful.
EDIT: found another issue: the external display won’t power off as configured in the system settings.
nvidia-bug-report.log.gz (344.1 KB)

Hey @agoins, any update on the issues reported by @SenojEkul and @TauAkiou? I am experiencing the same issue with a 3440x1440 external monitor. I have seen the same behavior over HDMI and over DisplayPort through a Thunderbolt dock. This significantly impacts the usability of my Optimus laptop (ThinkPad P53).

@jcstryker

If you mean the issue with lag (1 FPS) when Reverse PRIME outputs are the only active outputs, this is an upstream issue, as mentioned here.

If you mean the issue where the desktop spans multiple displays, this will be fixed in an upcoming release.

It looks like driver 455.23.04 does indeed fix the reverse PRIME issue. This is pretty awesome! Most games I tried seem to run pretty well using reverse PRIME + PRIME offload too.
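
In case anyone wants to reproduce: I launch games with the usual render offload variables, something like the following, where somegame is a placeholder for the actual binary:

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia somegame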

I think there may be a return of the Vulkan half-refresh issue, though? Hard to say. The game I have issues with is Wreckfest; with the beta driver I get about half the performance I get with 450.66.

Other than the janky desktop refreshing, this is awesome.

Noticing this as well in World of Warcraft: about half the frame rate compared to 450.66.

I did a bunch more testing with 455.23.04.

No matter what, reverse PRIME really isn’t very smooth. It’s stuttery; the difference between PRIME mode and reverse PRIME is night and day. There is also a performance drop, seemingly 10-20 fps on average across the dozen games I tried.

To make testing a bit easier, I set games to use a 1080p window.

First, I had this window on the external reverse PRIME display; there was a performance drop of ~15 fps, and it was stuttery, as if it were dropping frames.

I then dragged the window to the laptop screen. Same results as above (I didn’t close the game).

I then unplugged the external display, gained back ~15 fps, and the game became smooth as glass.

Plugged the external back in. The stutter came back. The desktop is also stuttery (GNOME, Fedora 33).

Please note: the results above are exactly the same as what I get with fullscreen exclusive games in all these scenarios.

I’m absolutely over the moon with the ability to plug in a screen while in offload mode. This really makes my life 11x better already. Just need it to be smooth.

Is it possible that it looks and feels stuttery because it is dropping frames, and that’s also why frame rates are lower?

EDIT: One additional data point: setting the displays as mirrored is very smooth, with no performance drop. The external as a single display is still 1 fps.
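
For anyone who wants to reproduce the mirrored setup, I used something along these lines (the output names are from my machine and will differ on yours):

xrandr --output HDMI-1-0 --same-as eDP-1 --auto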

Hi!
Is the issue with amdgpu fixed in 455.23.04? I am still getting just a black screen.
Does it work better with the modesetting driver?
I can do more testing if needed and provide some logs and my experience.
I have an ASUS Zephyrus G14 with a Ryzen 7 4800HS and a 1660 Ti as the dGPU.

I did some testing but couldn’t get past the black screen either.

Pretty much the same issues as @SenojEkul. It works, but the external display is still 1 fps (awaiting progress on Sync present to slave outputs (!460) · Merge requests · xorg / xserver · GitLab), and extending the displays is less than optimal.

@agoins Are the 1 fps issue for external displays and the reduced performance on extended displays related at all? I’m going to try disabling vsync and see if that does anything in an extended environment.

455.23.04 fixes the issue where some resolutions would cause corruption with Reverse PRIME, and the issue where the desktop would span multiple outputs in some configurations.

@pobrn: A fix for the crash when disabling the RandR provider will be included in an upcoming release.

@dragonn.ms, @arthurprs: AMDGPU not working is a known issue, we’ve been working on this for a while now with internal bug 2759189.

@TauAkiou: The issue with Reverse PRIME-only configurations being throttled to 1 FPS is somewhat related to the issue where apps may appear to stutter on Reverse PRIME outputs. The X server’s Present extension implementation is responsible for implementing vsync for iGPU and PRIME Render Offload applications, and it doesn’t currently support synchronizing to PRIME (in this case, Reverse PRIME) outputs. The current behavior is to fall back to the RandR primary output, as long as it is non-PRIME. As a result, if you have an active non-PRIME output, such as the laptop’s internal panel, apps will sync to that. The two outputs will likely refresh at slightly different times, leading to stutter. Do you find that the stuttering goes away when vsync is disabled? If you are using a compositor and the app isn’t fullscreen unredirected, you may also need to disable vsync for the compositor.

Likewise, if there is no non-PRIME output for X Present to sync to, it will fall back to the default behavior of throttling to 1 FPS. This should also be resolved by disabling vsync.
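
For reference, you can check which output is currently the RandR primary, i.e. the one X Present will fall back to, with:

xrandr --query | grep primary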

@SenojEkul: See above for the possible cause of the stutter that you are seeing. The FPS drop you are seeing may be related to overhead introduced by PRIME Render Offload and Reverse PRIME. In order for a PRIME Render Offload app to be shown on the iGPU’s desktop, the contents of the window have to be copied across the PCIe bus into system memory, incurring bandwidth overhead. Then, in order for the iGPU’s desktop to be displayed on a dGPU output, Reverse PRIME has to copy that region of the desktop across the PCIe bus again into video memory, incurring more bandwidth overhead. These two combined can result in significant bandwidth usage that could affect performance, especially for laptops and eGPUs that are limited to 2-4 PCIe lanes.

A future driver release will introduce an optimization that avoids the overhead from both PRIME Render Offload and Reverse PRIME for fullscreen, unoccluded, unredirected windows, where the dGPU can just display the app’s contents directly. In this case, the bandwidth overhead should be no more than a native desktop.
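
As a back-of-the-envelope illustration of the bandwidth involved (rough numbers, not an official figure): a 1920x1080 desktop at 60 Hz with 4 bytes per pixel is about 1920 × 1080 × 4 × 60 ≈ 0.5 GB/s per copy, so the render offload copy plus the Reverse PRIME copy add up to roughly 1 GB/s of PCIe traffic before the application’s own transfers are counted; at 4K, the two copies together come to about 4 GB/s.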

@agoins this is really exciting news. How soon will we see “optimization that avoids the overhead from both PRIME Render Offload and Reverse PRIME for fullscreen”?

Is there anything that can be done to alleviate desktop stutter in reverse-prime mode?

FWIW, whatever XFWM is doing seems to be working right: no stuttering on the PRIME display. Maybe give xfce4 with compositing enabled a try.

I just installed XFCE on Fedora to check this. With the XFCE compositor enabled, the desktop has the same sort of micro-stutter as GNOME. With the compositor disabled, it runs rather smoothly, but with no vsync.
Games sort of seem to run better?

It’s almost as if the dGPU needs to draw twice as fast to compensate for the buffer I/O?

How does Windows achieve smooth frame rates? Hmm, then again, Windows seems to suffer the same fate (though not in games) every few seconds.