The all-new OutputSink feature, aka reverse PRIME

Let’s gather data and info about the OutputSink feature introduced with the 450 driver. I’ll start with some questions:
Q: Does this work with PRIME synchronisation?
A: It doesn’t appear to; the external display is highly lagged.

Q: Does this work with Render Offload?
A: Yes.

Q: Does this work with D3/runtime power management?
A: Yes, once the screen is unplugged.

Feel free to share your adventures and failures along the way.

Also, in PRIME mode the external screen seems to be undetected/unused when the laptop wakes from sleep (though it still shows up in gnome-control-center).

The main issues I have, and that many others seem to share, are:

  • in reverse PRIME mode the external and internal screens are treated as one large virtual screen, with the smaller screen showing only a portion of it
  • a laggy external display
  • above a certain resolution threshold the external display looks like a scrambled CRT (as if the v-sync timing were invalid)

This is what I get with the defaults suggested by NVIDIA. I tried to center the screens vertically. If you look at the wallpaper, you can see which portion of the virtual spanned desktop is shown on the laptop screen.

I still cannot get the native resolution on the external display.

Pretty much nothing has changed between 450.51 and 450.57.

My results are virtually the same as SenojEkul’s.

  • Synchronization doesn’t work.
  • Render offload: yes.
  • Runtime power management: I wouldn’t quite know how to tell.

Essentially, Reverse PRIME is still completely unusable.

No change with 450.57

  • The external display as a single display is extremely laggy
  • Same virtual-desktop issue as the post above with photos
  • The virtual desktop means that tiling windows left/right spans both screens
  • The native resolution of 3440x1440 on the external display is warped like a CRT with incorrect v-sync

I have a notebook with a 1060 and have tried it with an old 1080p TV so far. The distro is Debian unstable, the DE is GNOME 3 (with mutter 3.36.3), Xorg 1.20.8, with the 450.57 driver installed from experimental.

What works:

  • PRIME synchronization seems to work
  • render offload works
  • visually it works OK; display resolutions are good
  • GNOME’s display settings detect and manage the screens correctly
  • blanking and unblanking works

Issues:

  • no runtime power management, as the driver still has no proper support for it on this card
  • very high CPU usage: 20-40% Xorg and 9-20% nv_queue (sampled as shown below)
  • when only the external display is configured it becomes extremely slow: clicks and keypresses take effect seconds too late
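
For reference, this is roughly how I sampled those CPU figures: a plain top one-liner, nothing NVIDIA-specific (process names may differ on other setups):

# One batch-mode iteration of top, filtered to the X server process and
# the NVIDIA kernel queue worker:
top -b -n 1 | grep -E 'Xorg|nv_queue'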

Overall it works surprisingly well; for my personal use, the biggest issue seems to be the CPU usage.

Using NVIDIA as the main output device for the DE, with another GPU as the graphics accelerator, still doesn’t seem to work in 450.57 on Arch.

I’m hoping that one day I’ll be able to fully utilize the GPU I bought 7 years ago. So far, the mostly amateur, hobby-grade community of open-source enthusiasts has done a better job than this multi-billion-dollar corporation, despite said company’s obstructions. Switching to nouveau, again…


On my hardware, an AMD Ryzen laptop with a dedicated NVIDIA GTX 1650 card, Reverse PRIME is kind of not working and working at the same time.

I already had an Xorg configuration file for PRIME offloading (the AMD integrated GPU used by default), and that was working well. The issue I have with Reverse PRIME is that I get a black screen when I connect an external monitor. It may seem like the HDMI monitor isn’t being detected, but that would be a false assumption: a video signal is actually being sent, but I think the framebuffer is not being copied at all. I know that is the issue because when I go into nvidia-settings and change the gamma and brightness options, the display actually cycles through different tones of white as I change the settings.

This problem is not exclusive to the proprietary NVIDIA driver, though: if I do the same with the Nouveau driver, there is also a black screen, but with Nouveau the cursor does show up. I have heard that this might be an issue with dma_buf in the amdgpu driver (it was supposedly fixed on Intel), and since I got a kernel stack trace with Nouveau when connecting the HDMI cable, I’m going to report this to the Linux DRI bug-report infrastructure.


Reverse PRIME is known not to work with amdgpu. For future reference, we’re investigating what goes wrong with mapping its dma_bufs in internal bug number 2759189. The bug tracker isn’t public, but you can use that number to refer to this issue.

The garbled screen with some reverse PRIME resolutions is being investigated in internal bug number 200627069 and it looks like the problem has been identified and a fix is in progress.

After some further testing, it seems that scaling works incorrectly: if the framebuffer is bigger than the display’s resolution, only a part of it is rendered. However, the hardware cursor still renders even “outside” the image (see attachment).
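
In case it helps with reproduction, the trigger looks roughly like this; HDMI-1-0 is an assumed output name, use whatever xrandr reports for your reverse PRIME sink:

# Set a small mode but a larger framebuffer on the sink output; only part
# of the framebuffer is rendered, while the hardware cursor still draws
# in the "missing" region.
xrandr --output HDMI-1-0 --mode 1280x720 --fb 1920x1080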


I can confirm I have the exact same issue:

  • Driver 450.57

  • ThinkPad X1 Extreme Gen 1 with a 1050 Ti Max-Q and Intel video, on kernel 5.7.8-1

It works on my ThinkPad P52, but is unusable in this state unfortunately.

  • It’s horribly slow (almost unusable) on a Lenovo ThinkPad Thunderbolt 3 Docking Station (works fine with bumblebee and xf86-video-intel)
  • It only detects 1 external screen, while 2 are connected
  • It does display the second screen in xrandr, but the screens are called 23 (as in 2 and 3 together) and there doesn’t seem to be a way to split them
  • My laptop screen is 1920x1080 and my monitors are 1920x1200; they have the black-bar issue described above.

Unusable for me so far, unfortunately.
Kernel: 5.7.6
Driver: 450.57

Hi nvaert1986,

Could this be because your laptop resolution is lower than the monitor’s, i.e. the opposite of our case? Do you notice part of the desktop missing on the external monitor in the vertical direction?

My laptop resolution is 1920x1080. The resolution of my monitors is 1920x1200, and I have a black horizontal bar near (but not quite at) the bottom of my external screens when their resolution is set to 1920x1200. Below that bar a little strip of wallpaper is visible, but the area is unusable. So the black bar in GNOME is not completely at the bottom, as it’s supposed to be (I’m using the Window List extension in GNOME). It looks like the Window List extension is placed as if the screen were exactly 1920x1080, because the number of missing pixels is approximately the leftover.

I see there is another tracking issue referenced in another post: 3063041.

Can you give a status update on these? I’ve been running a small Discord server for people with ASUS gaming laptops, and a lot of people are very excited about the possibilities this will enable.

I’m experiencing a serious issue with Reverse PRIME: I cannot turn it off. It may sound strange, but it is true. I followed this guide to set it up; the only difference is that I use the following configuration:

Section "ServerLayout"
        Identifier "layout"
        Screen 0 "intel"
        Inactive "nvidia"
        Option "AllowNVIDIAGPUScreens"
EndSection

Section "Device"
        Identifier "intel"
        Driver "modesetting"
        BusID "PCI:0:2:0"
        Option "DRI" "3"
EndSection

Section "Screen"
        Identifier "intel"
        Device "intel"
EndSection

Section "Device"
        Identifier "nvidia"
        Driver "nvidia"
        BusID "PCI:1:0:0"
EndSection

Section "Screen"
        Identifier "nvidia"
        Device "nvidia"
EndSection

Then I ran xrandr --setprovideroutputsource NVIDIA-G0 modesetting and xrandr --auto, and all was well: the outputs of the dGPU showed up in the output of xrandr. I was indeed able to connect an external monitor, and it worked, apart from the fact that under GNOME, if the external monitor is positioned above the internal one, the two monitors act as if they were a single one: for example, making an app full screen causes it to span both monitors.
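
Spelled out, the setup sequence was as follows (the provider names come from my xrandr --listproviders output below; they may differ on other systems):

# Attach the NVIDIA provider as an output sink of the iGPU source,
# then enable the newly available outputs:
xrandr --setprovideroutputsource NVIDIA-G0 modesetting
xrandr --auto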

xrandr --listproviders reports:

Providers: number : 2
Provider 0: id: 0x43 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 3 outputs: 1 associated providers: 1 name:modesetting
Provider 1: id: 0x26e cap: 0x2, Sink Output crtcs: 4 outputs: 5 associated providers: 1 name:NVIDIA-G0

Unfortunately, I cannot turn it off. I tried:

  • restarting X
  • rebooting
  • downgrading the drivers to 440.100 and then upgrading
  • disabling optimus-manager
  • manually starting X
  • disabling D3 power management
  • both the 450.57 and 450.66 drivers
  • the latest version of X from git

and combinations of those. The setting persists.

NVIDIA driver version: 450.66
XOrg server version: 1.20.9-1 (Manjaro)
XOrg ANSI C Emulation: 0.4
XOrg Video Driver: 24.1
XOrg XInput driver : 24.1
XOrg Server Extension : 10.0
Linux kernel version: 5.4.60-2-MANJARO
DE: GNOME 3.36.5
DM: GDM

When an external display is connected (HDMI) and I run xrandr --setprovideroutputsource NVIDIA-G0 0x0, the external display turns off, so I assume it no longer receives any output. However, xrandr still lists the outputs of the NVIDIA GPU, and the number of associated providers is still 1; it should change to 0, no? When I use PRIME to render everything on the NVIDIA GPU (xrandr --setprovideroutputsource modesetting NVIDIA-0) and then execute xrandr --setprovideroutputsource modesetting 0x0, the internal display goes blank, the internal display disappears from the output of xrandr, and the number of associated providers goes down to zero as expected.
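
To spell out the expectation versus the actual behaviour (a source provider of 0x0 means “none”):

xrandr --setprovideroutputsource NVIDIA-G0 0x0   # detach the sink from its source
xrandr --listproviders   # expected: "associated providers: 0" for NVIDIA-G0; actual: still 1
xrandr                   # expected: the NVIDIA outputs disappear; actual: they remain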


When I execute xrandr --setprovideroutputsource NVIDIA-G0 0x0 for the second time, the server hangs or dies:

// first xrandr --setprovideroutputsource NVIDIA-G0 0x0
[  4641.429] (II) NVIDIA(G0): Setting mode "NULL"
[  4641.474] (II) modeset(0): EDID vendor "BOE", prod id 2125
[  4641.474] (II) modeset(0): Printing DDC gathered Modelines:
[  4641.474] (II) modeset(0): Modeline "1920x1080"x0.0  342.05  1920 2028 2076 2080  1080 1090 1100 1142 +hsync -vsync (164.4 kHz eP)
[  4641.500] (II) modeset(0): EDID vendor "BOE", prod id 2125
[  4641.500] (II) modeset(0): Printing DDC gathered Modelines:
[  4641.500] (II) modeset(0): Modeline "1920x1080"x0.0  342.05  1920 2028 2076 2080  1080 1090 1100 1142 +hsync -vsync (164.4 kHz eP)
[  4641.502] (II) modeset(0): EDID vendor "BOE", prod id 2125
[  4641.502] (II) modeset(0): Printing DDC gathered Modelines:
[  4641.502] (II) modeset(0): Modeline "1920x1080"x0.0  342.05  1920 2028 2076 2080  1080 1090 1100 1142 +hsync -vsync (164.4 kHz eP)
[  4641.502] (--) NVIDIA(GPU-0): DFP-0: disconnected
[  4641.502] (--) NVIDIA(GPU-0): DFP-0: Internal DisplayPort
[  4641.502] (--) NVIDIA(GPU-0): DFP-0: 2660.0 MHz maximum pixel clock
[  4641.502] (--) NVIDIA(GPU-0): 
[  4641.503] (--) NVIDIA(GPU-0): DFP-1: disconnected
[  4641.503] (--) NVIDIA(GPU-0): DFP-1: Internal TMDS
[  4641.503] (--) NVIDIA(GPU-0): DFP-1: 165.0 MHz maximum pixel clock
[  4641.503] (--) NVIDIA(GPU-0): 
[  4641.503] (--) NVIDIA(GPU-0): DFP-2: disconnected
[  4641.503] (--) NVIDIA(GPU-0): DFP-2: Internal DisplayPort
[  4641.503] (--) NVIDIA(GPU-0): DFP-2: 2660.0 MHz maximum pixel clock
[  4641.503] (--) NVIDIA(GPU-0): 
[  4641.503] (--) NVIDIA(GPU-0): DFP-3: disconnected
[  4641.503] (--) NVIDIA(GPU-0): DFP-3: Internal TMDS
[  4641.503] (--) NVIDIA(GPU-0): DFP-3: 165.0 MHz maximum pixel clock
[  4641.503] (--) NVIDIA(GPU-0): 
[  4641.533] (--) NVIDIA(GPU-0): Samsung S22F350 (DFP-4): connected
[  4641.533] (--) NVIDIA(GPU-0): Samsung S22F350 (DFP-4): Internal TMDS
[  4641.533] (--) NVIDIA(GPU-0): Samsung S22F350 (DFP-4): 600.0 MHz maximum pixel clock
[  4641.533] (--) NVIDIA(GPU-0): 
// second xrandr --setprovideroutputsource NVIDIA-G0 0x0
[  4662.608] (II) NVIDIA(G0): Setting mode "NULL"
[  4662.615] (EE) 
[  4662.615] (EE) Backtrace:
[  4662.615] (EE) 0: /usr/lib/Xorg (xorg_backtrace+0x53) [0x5608c2619c03]
[  4662.615] (EE) 1: /usr/lib/Xorg (0x5608c24d3000+0x151a45) [0x5608c2624a45]
[  4662.615] (EE) 2: /usr/lib/libc.so.6 (0x7f7addfe0000+0x3d6a0) [0x7f7ade01d6a0]
[  4662.616] (EE) 3: /usr/lib/libc.so.6 (gsignal+0x145) [0x7f7ade01d615]
[  4662.616] (EE) 4: /usr/lib/libc.so.6 (abort+0x116) [0x7f7ade006862]
[  4662.616] (EE) 5: /usr/lib/libc.so.6 (0x7f7addfe0000+0x7f5e8) [0x7f7ade05f5e8]
[  4662.616] (EE) 6: /usr/lib/libc.so.6 (0x7f7addfe0000+0x8727a) [0x7f7ade06727a]
[  4662.616] (EE) 7: /usr/lib/libc.so.6 (0x7f7addfe0000+0x88d4c) [0x7f7ade068d4c]
[  4662.616] (EE) 8: /usr/lib/xorg/modules/drivers/nvidia_drv.so (0x7f7adc9a5000+0x7af19) [0x7f7adca1ff19]
[  4662.616] (EE) 9: /usr/lib/xorg/modules/drivers/nvidia_drv.so (0x7f7adc9a5000+0x4eec1a) [0x7f7adce93c1a]
[  4662.616] (EE) 
[  4662.616] (EE) 
Fatal server error:
[  4662.616] (EE) Caught signal 6 (Aborted). Server aborting
// couple lines about consulting the X.Org foundation
// then the server hangs or terminates "gracefully"

When it hangs after the second “setprovideroutputsource”, gdb reveals this backtrace:

#0  0x00007f7ee44d2a1b in __lll_lock_wait_private () at /usr/lib/libc.so.6
#1  0x00007f7ee44d9da3 in calloc () at /usr/lib/libc.so.6
#2  0x00007f7ee46aa2ad in _dbus_pending_call_new_unlocked () at /usr/lib/libdbus-1.so.3
#3  0x00007f7ee4697cd5 in dbus_connection_send_with_reply () at /usr/lib/libdbus-1.so.3
#4  0x00007f7ee4698082 in dbus_connection_send_with_reply_and_block () at /usr/lib/libdbus-1.so.3
#5  0x000055712c87122b in ddxGiveUp ()
#6  0x000055712c851efc in FatalError ()
#7  0x000055712c857aa9 in  ()
#8  0x00007f7ee448a6a0 in <signal handler called> () at /usr/lib/libc.so.6
#9  0x00007f7ee448a615 in raise () at /usr/lib/libc.so.6
#10 0x00007f7ee4473862 in abort () at /usr/lib/libc.so.6
#11 0x00007f7ee44cc5e8 in __libc_message () at /usr/lib/libc.so.6
#12 0x00007f7ee44d427a in  () at /usr/lib/libc.so.6
#13 0x00007f7ee44d5b0c in _int_free () at /usr/lib/libc.so.6
#14 0x00007f7ee32fb0c9 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#15 0x000055712de50040 in  ()
#16 0x000055712f096300 in  ()
#17 0x000055712e086340 in  ()
#18 0x000055712e8a5e60 in  ()
#19 0x000055712e8a5fe0 in  ()
#20 0x000055712e6eaa20 in  ()
#21 0x00000000000085b1 in  ()
#22 0x00007f7ee32fb15f in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#23 0x000055712f096300 in  ()
#24 0x00007f7ee3300bc2 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#25 0x000055712f27f890 in  ()
#26 0x000055712e6eaa20 in  ()
#27 0x0000000000000000 in  ()

When it does not run into that deadlock and dies “gracefully”, gdb shows this backtrace:

Thread 1 "Xorg" received signal SIGABRT, Aborted.
#0  0x00007fe350793615 in raise () at /usr/lib/libc.so.6
#1  0x00007fe35077c862 in abort () at /usr/lib/libc.so.6
#2  0x00007fe3507d55e8 in __libc_message () at /usr/lib/libc.so.6
#3  0x00007fe3507dd27a in  () at /usr/lib/libc.so.6
#4  0x00007fe3507de64c in _int_free () at /usr/lib/libc.so.6
#5  0x00007fe34f195f04 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#6  0x00007fe34f6040c9 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#7  0x0000557a91d97040 in  ()
#8  0x0000557a92f33470 in  ()
#9  0x0000557a91fcd310 in  ()
#10 0x0000557a927631e0 in  ()
#11 0x0000557a92763360 in  ()
#12 0x0000557a925a7da0 in  ()
#13 0x0000000000000195 in  ()
#14 0x00007fe34f60415f in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#15 0x0000557a92f33470 in  ()
#16 0x00007fe34f609bc2 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#17 0x0000557a930a4900 in  ()
#18 0x0000557a925a7da0 in  ()
#19 0x0000000000000000 in  ()

When I run the command just once and then close my GNOME session, the X server also dies or hangs; gdb reveals a similar stack:

#0  0x00007f0820f3ca1b in __lll_lock_wait_private () at /usr/lib/libc.so.6
#1  0x00007f0820f43da3 in calloc () at /usr/lib/libc.so.6
#2  0x00007f08211142ad in _dbus_pending_call_new_unlocked () at /usr/lib/libdbus-1.so.3
#3  0x00007f0821101cd5 in dbus_connection_send_with_reply () at /usr/lib/libdbus-1.so.3
#4  0x00007f0821102082 in dbus_connection_send_with_reply_and_block () at /usr/lib/libdbus-1.so.3
#5  0x000055b0da90822b in ddxGiveUp ()
#6  0x000055b0da8e8efc in FatalError ()
#7  0x000055b0da8eeaa9 in  ()
#8  0x00007f0820ef46a0 in <signal handler called> () at /usr/lib/libc.so.6
#9  0x00007f0820ef4615 in raise () at /usr/lib/libc.so.6
#10 0x00007f0820edd862 in abort () at /usr/lib/libc.so.6
#11 0x00007f0820f365e8 in __libc_message () at /usr/lib/libc.so.6
#12 0x00007f0820f3e27a in  () at /usr/lib/libc.so.6
#13 0x00007f0820f3fb0c in _int_free () at /usr/lib/libc.so.6
#14 0x00007f081fd647f5 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#15 0x000055b0daf8c040 in  ()
#16 0x000055b0db1c2430 in  ()
#17 0x000055b0db79cdb0 in  ()
#18 0x00007f081fd5ee62 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#19 0x00007f08200f91d8 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#20 0x00007f08200f91d8 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#21 0x000055b0daf8c040 in  ()
#22 0x00007f081fd5eed8 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#23 0x0000000000000000 in  ()

And when the server dies:

Thread 1 "Xorg" received signal SIGTERM, Terminated.
0x00007fa521bb45de in epoll_wait () from /usr/lib/libc.so.6 
(gdb) c
Continuing.
Thread 1 "Xorg" received signal SIGSEGV, Segmentation fault.
0x00007fa521b40310 in free () from /usr/lib/libc.so.6 
(gdb) bt
#0  0x00007fa521b40310 in free () at /usr/lib/libc.so.6
#1  0x00007fa5204f3f04 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#2  0x00007fa5209617f5 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#3  0x000055eaf7aaf040 in  ()
#4  0x000055eaf7ce5320 in  ()
#5  0x000055eaf82bfdb0 in  ()
#6  0x00007fa52095be62 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#7  0x00007fa520cf61d8 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#8  0x00007fa520cf61d8 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#9  0x000055eaf7aaf040 in  ()
#10 0x00007fa52095bed8 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#11 0x0000000000000000 in  ()

and the logs:

[   591.722] (EE) 
[   591.722] (EE) Backtrace:
[   591.722] (EE) 0: /usr/lib/Xorg (xorg_backtrace+0x53) [0x55eaf5d55c03]
[   591.722] (EE) 1: /usr/lib/Xorg (0x55eaf5c0f000+0x151a45) [0x55eaf5d60a45]
[   591.723] (EE) 2: /usr/lib/libc.so.6 (0x7fa521ab4000+0x3d6a0) [0x7fa521af16a0]
[   591.723] (EE) 3: /usr/lib/libc.so.6 (cfree+0x20) [0x7fa521b40310]
[   591.723] (EE) 4: /usr/lib/xorg/modules/drivers/nvidia_drv.so (0x7fa520479000+0x7af04) [0x7fa5204f3f04]
[   591.723] (EE) 5: /usr/lib/xorg/modules/drivers/nvidia_drv.so (0x7fa520479000+0x4e87f5) [0x7fa5209617f5]
[   591.723] (EE) 
[   591.723] (EE) Segmentation fault at address 0xfffffffffffffff7
[   591.723] (EE) 
Fatal server error:
[   591.723] (EE) Caught signal 11 (Segmentation fault). Server aborting

Redirecting the stderr stream of Xorg reveals another interesting thing:

double free or corruption (!prev)

Using the master branch of the xserver repository (commit 2902b78535ecc6821cc027351818b28a5c7fdbdc), the following traces can be obtained when executing the “setprovideroutputsource” command twice:

Thread 1 "X" received signal SIGSEGV, Segmentation fault.
0x00007fcf04575310 in free () from /usr/lib/libc.so.6
(gdb) bt
#0  0x00007fcf04575310 in free () at /usr/lib/libc.so.6
#1  0x00007fcf02fc2f04 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#2  0x00007fcf0347b209 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#3  0x000056134a15cae0 in  ()
#4  0x000056134b5a8760 in  ()
#5  0x000056134a392d00 in  ()
#6  0x000056134ab28dd0 in  ()
#7  0x000056134ab28f50 in  ()
#8  0x000056134a96db60 in  ()
#9  0x0000000000000000 in  ()
(gdb) c
Continuing.

Thread 1 "X" received signal SIGABRT, Aborted.
0x00007fcf04526615 in raise () from /usr/lib/libc.so.6
(gdb) bt
#0  0x00007fcf04526615 in raise () at /usr/lib/libc.so.6
#1  0x00007fcf0450f862 in abort () at /usr/lib/libc.so.6
#2  0x0000561349d8725a in System ()
#3  0x0000561349d90a0b in AbortServer ()
#4  0x0000561349d90f2b in FatalError ()
#5  0x0000561349d83576 in OsSigHandler ()
#6  0x00007fcf045266a0 in <signal handler called> () at /usr/lib/libc.so.6
#7  0x00007fcf04575310 in free () at /usr/lib/libc.so.6
#8  0x00007fcf02fc2f04 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#9  0x00007fcf0347b209 in  () at /usr/lib/xorg/modules/drivers/nvidia_drv.so
#10 0x000056134a15cae0 in  ()
#11 0x000056134b5a8760 in  ()
#12 0x000056134a392d00 in  ()
#13 0x000056134ab28dd0 in  ()
#14 0x000056134ab28f50 in  ()
#15 0x000056134a96db60 in  ()
#16 0x0000000000000000 in  ()

and the X logs (verbosity 9):

// first "setprovideroutputsource"
[  4734.301] (II) modeset(0): EDID vendor "BOE", prod id 2125
[  4734.301] (II) modeset(0): Printing DDC gathered Modelines:
[  4734.301] (II) modeset(0): Modeline "1920x1080"x0.0  342.05  1920 2028 2076 2080  1080 1090 1100 1142 +hsync -vsync (164.4 kHz eP)
[  4734.303] (II) modeset(0): EDID vendor "BOE", prod id 2125
[  4734.303] (II) modeset(0): Printing DDC gathered Modelines:
[  4734.303] (II) modeset(0): Modeline "1920x1080"x0.0  342.05  1920 2028 2076 2080  1080 1090 1100 1142 +hsync -vsync (164.4 kHz eP)
[  4734.303] (--) NVIDIA(GPU-0): DFP-0: disconnected
[  4734.303] (--) NVIDIA(GPU-0): DFP-0: Internal DisplayPort
[  4734.303] (--) NVIDIA(GPU-0): DFP-0: DFP is not internal to notebook
[  4734.303] (--) NVIDIA(GPU-0): DFP-0 Name Aliases:
[  4734.303] (--) NVIDIA(GPU-0):   DFP
[  4734.303] (--) NVIDIA(GPU-0):   DFP-0
[  4734.303] (--) NVIDIA(GPU-0):   DPY-0
[  4734.303] (--) NVIDIA(GPU-0):   DP-1-0
[  4734.303] (--) NVIDIA(GPU-0):   DP-1-0
[  4734.303] (--) NVIDIA(GPU-0):   Connector-1
[  4734.303] (--) NVIDIA(GPU-0): DFP-0: 2660.0 MHz maximum pixel clock
[  4734.303] (--) NVIDIA(GPU-0): 
[  4734.304] (--) NVIDIA(GPU-0): DFP-1: disconnected
[  4734.304] (--) NVIDIA(GPU-0): DFP-1: Internal TMDS
[  4734.304] (--) NVIDIA(GPU-0): DFP-1: DFP is not internal to notebook
[  4734.304] (--) NVIDIA(GPU-0): DFP-1 Name Aliases:
[  4734.304] (--) NVIDIA(GPU-0):   DFP
[  4734.304] (--) NVIDIA(GPU-0):   DFP-1
[  4734.304] (--) NVIDIA(GPU-0):   DPY-1
[  4734.304] (--) NVIDIA(GPU-0):   DP-1-1
[  4734.304] (--) NVIDIA(GPU-0):   DP-1-1
[  4734.304] (--) NVIDIA(GPU-0):   Connector-1
[  4734.304] (--) NVIDIA(GPU-0): DFP-1: 165.0 MHz maximum pixel clock
[  4734.304] (--) NVIDIA(GPU-0): 
[  4734.304] (--) NVIDIA(GPU-0): DFP-2: disconnected
[  4734.304] (--) NVIDIA(GPU-0): DFP-2: Internal DisplayPort
[  4734.304] (--) NVIDIA(GPU-0): DFP-2: DFP is not internal to notebook
[  4734.304] (--) NVIDIA(GPU-0): DFP-2 Name Aliases:
[  4734.304] (--) NVIDIA(GPU-0):   DFP
[  4734.304] (--) NVIDIA(GPU-0):   DFP-2
[  4734.304] (--) NVIDIA(GPU-0):   DPY-2
[  4734.304] (--) NVIDIA(GPU-0):   DP-1-2
[  4734.304] (--) NVIDIA(GPU-0):   DP-1-2
[  4734.304] (--) NVIDIA(GPU-0):   Connector-2
[  4734.304] (--) NVIDIA(GPU-0): DFP-2: 2660.0 MHz maximum pixel clock
[  4734.304] (--) NVIDIA(GPU-0): 
[  4734.304] (--) NVIDIA(GPU-0): DFP-3: disconnected
[  4734.304] (--) NVIDIA(GPU-0): DFP-3: Internal TMDS
[  4734.304] (--) NVIDIA(GPU-0): DFP-3: DFP is not internal to notebook
[  4734.304] (--) NVIDIA(GPU-0): DFP-3 Name Aliases:
[  4734.304] (--) NVIDIA(GPU-0):   DFP
[  4734.304] (--) NVIDIA(GPU-0):   DFP-3
[  4734.304] (--) NVIDIA(GPU-0):   DPY-3
[  4734.304] (--) NVIDIA(GPU-0):   DP-1-3
[  4734.304] (--) NVIDIA(GPU-0):   DP-1-3
[  4734.304] (--) NVIDIA(GPU-0):   Connector-2
[  4734.304] (--) NVIDIA(GPU-0): DFP-3: 165.0 MHz maximum pixel clock
[  4734.304] (--) NVIDIA(GPU-0): 
[  4734.304] (--) NVIDIA(GPU-0): DFP-4: disconnected
[  4734.304] (--) NVIDIA(GPU-0): DFP-4: Internal TMDS
[  4734.304] (--) NVIDIA(GPU-0): DFP-4: DFP is not internal to notebook
[  4734.304] (--) NVIDIA(GPU-0): DFP-4 Name Aliases:
[  4734.304] (--) NVIDIA(GPU-0):   DFP
[  4734.304] (--) NVIDIA(GPU-0):   DFP-4
[  4734.304] (--) NVIDIA(GPU-0):   DPY-4
[  4734.304] (--) NVIDIA(GPU-0):   HDMI-1-0
[  4734.304] (--) NVIDIA(GPU-0):   HDMI-1-0
[  4734.304] (--) NVIDIA(GPU-0):   Connector-0
[  4734.304] (--) NVIDIA(GPU-0): DFP-4: 165.0 MHz maximum pixel clock
[  4734.304] (--) NVIDIA(GPU-0): 
[  4734.306] (II) modeset(0): EDID vendor "BOE", prod id 2125
[  4734.306] (II) modeset(0): Printing DDC gathered Modelines:
[  4734.306] (II) modeset(0): Modeline "1920x1080"x0.0  342.05  1920 2028 2076 2080  1080 1090 1100 1142 +hsync -vsync (164.4 kHz eP)
[  4735.348] (II) NVIDIA(G0): NoScanout X screen configured with resolution 640x480
[  4735.348] (II) NVIDIA(G0):     (default)
[  4735.349] (II) NVIDIA(G0): Setting mode "NULL"
[  4735.356] (II) modeset(0): EDID vendor "BOE", prod id 2125
[  4735.356] (II) modeset(0): Printing DDC gathered Modelines:
[  4735.356] (II) modeset(0): Modeline "1920x1080"x0.0  342.05  1920 2028 2076 2080  1080 1090 1100 1142 +hsync -vsync (164.4 kHz eP)
// second "setprovideroutputsource"
[  4797.544] (II) NVIDIA(G0): NoScanout X screen configured with resolution 640x480
[  4797.544] (II) NVIDIA(G0):     (default)
[  4797.545] (II) NVIDIA(G0): Setting mode "NULL"
[  4800.254] (EE) 
[  4800.254] (EE) Backtrace:
[  4800.254] (EE) 0: /usr/bin/X (xorg_backtrace+0xc0) [0x561349d7eed0]
[  4800.254] (EE) 1: /usr/bin/X (0x561349bc5000+0x1be4bd) [0x561349d834bd]
[  4800.254] (EE) 2: /usr/lib/libc.so.6 (0x7fcf044e9000+0x3d6a0) [0x7fcf045266a0]
[  4800.254] (EE) 3: /usr/lib/libc.so.6 (cfree+0x20) [0x7fcf04575310]
[  4800.254] (EE) 4: /usr/lib/xorg/modules/drivers/nvidia_drv.so (0x7fcf02f48000+0x7af04) [0x7fcf02fc2f04]
[  4800.254] (EE) 5: /usr/lib/xorg/modules/drivers/nvidia_drv.so (0x7fcf02f48000+0x533209) [0x7fcf0347b209]
[  4800.254] (EE) 
[  4800.254] (EE) Segmentation fault at address 0x0
[  4800.254] (EE) 
Fatal server error:
[  4800.254] (EE) Caught signal 11 (Segmentation fault). Server aborting

My theory is that the NVIDIA X driver frees something that it has already freed the first time, and depending on what the memory layout happens to be, glibc either catches it (SIGABRT plus a double-free warning) or doesn’t (SIGSEGV followed by SIGABRT). But I can only guess. Honestly, I cannot understand why the outputs don’t disappear after xrandr --setprovideroutputsource NVIDIA-G0 0x0 is run.
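
If it really is a double free, glibc can be made to flag it deterministically instead of depending on the heap layout. A debugging sketch, assuming the server is started manually and the Xorg binary is not setuid (glibc ignores MALLOC_CHECK_ for setuid programs):

# Enable glibc heap-consistency checking; value 3 prints a diagnostic and
# aborts as soon as the double free is detected:
MALLOC_CHECK_=3 /usr/lib/Xorg :1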

Very interestingly, one time I traced Xorg with ltrace (ltrace -C -f -S -t -p $(pidof Xorg)), and for some peculiar reason the “setprovideroutputsource” command ran without error twice, and the outputs of the NVIDIA GPU did disappear. Alas, they returned after a reboot.

Another thing that may be interesting: using 440.100 (but not 450.57 or 450.66), nvidia-smi reports a corrupted infoROM after the NVIDIA GPU goes to sleep, but not before. Someone else who has the exact same laptop as I do confirmed that the same warning is printed with 440.100 on their machine as well (but not with 450.66), so I hope it was a bug in the 440.100 drivers.

As far as I can see, the kernel driver doesn’t complain about anything; there are no warnings or errors.

I’ll gladly provide more logs/traces if needed. Thank you for reading this far.

I’m running a Dell Precision 7530 with an Intel UHD 630 iGPU and an NVIDIA Quadro RTX 5000 Max-Q dGPU. I have a rather unusual display output layout:

[Image: display output layout diagram, taken from this technical page; see especially the top-left corner, detailing the ‘DGFF card’.]

I observe the same issues as everyone else:

  1. Setting the monitor connected to the NVIDIA GPU as the only display (i.e. the NVIDIA-G0 sink) causes extreme compositing lag; the cursor, however, runs at the full refresh rate. This is unusable. Clearly the Reverse PRIME implementation needs work.

  2. Mirroring/extending the laptop and NVIDIA-G0 sink displays mitigates this problem somewhat, but there is still obvious lag and latency. My 144 Hz LG 27GL83-A display with G-Sync can only run at 75 Hz in this mode, and there is obvious tearing, etc.

A great new feature, but the implementation leaves a lot to be desired.

It should be noted that my notebook (as shown in the image above) allows the dGPU to run alone; in that case KDE is composited perfectly, and the driver even detects and enables the G-Sync capability.

@aplattner, @amrits, @agoins

I try to refrain from doing this, and I’m sorry for the ping, but I’d really appreciate any input on my issue described in a previous comment.

@SRSR333
The compositing lag you observe when you only have a Reverse PRIME display is caused by a limitation in X Present: it can’t sync to PRIME sinks. Vsync works with NVIDIA-based PRIME Sync (non-reverse) because the NVIDIA driver implements its own mechanism for it, but when the NVIDIA GPU is the sink, as in Reverse PRIME, we have to rely on what the server supports. There is some work being done upstream on this: Sync present to slave outputs (!460) · Merge requests · xorg / xserver · GitLab. Once that is implemented, some further work will be required on the NVIDIA side to support it. Unfortunately, the only current workaround is to set a non-Reverse-PRIME display as the primary in RandR, or to disable vsync.
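
For anyone trying that workaround, it looks roughly like this; eDP-1 is an assumed name for the iGPU-driven internal panel, check xrandr for the name on your system:

# Make the non-Reverse-PRIME internal panel the RandR primary so Present
# syncs to it rather than to the PRIME sink:
xrandr --output eDP-1 --primary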

@pobrn
Thanks for reporting that issue. I was able to reproduce it, and a fix is in progress, tracked under internal bug number 3115486.