Nvidia, please get it together with external monitors on Wayland

Here are my details and a video describing my issue. I also couldn’t disable the GSP firmware with the recent 580 drivers. I hope this helps you understand the issue.

             .',;::::;,'.                 gny@fedora
         .';:cccccccccccc:;,.             ----------
      .;cccccccccccccccccccccc;.          OS: Fedora Linux 42 (KDE Plasma Desktop Edition) x86_4
    .:cccccccccccccccccccccccccc:.        Host: Dell G15 5511
  .;ccccccccccccc;.:dddl:.;ccccccc;.      Kernel: Linux 6.16.8-200.fc42.x86_64
 .:ccccccccccccc;OWMKOOXMWd;ccccccc:.     Uptime: 12 mins
.:ccccccccccccc;KMMc;cc;xMMc;ccccccc:.    Packages: 2423 (rpm)
,cccccccccccccc;MMM.;cc;;WW:;cccccccc,    Shell: bash 5.2.37
:cccccccccccccc;MMM.;cccccccccccccccc:    Display (MSI G241): 1920x1080 @ 144 Hz in 24" [Extern]
:ccccccc;oxOOOo;MMM000k.;cccccccccccc:    DE: KDE Plasma 6.4.5
cccccc;0MMKxdd:;MMMkddc.;cccccccccccc;    WM: KWin (Wayland)
ccccc;XMO';cccc;MMM.;cccccccccccccccc'    WM Theme: Breeze
ccccc;MMo;ccccc;MMW.;ccccccccccccccc;     Theme: Breeze (Dark) [Qt], Breeze [GTK3/4]
ccccc;0MNc.ccc.xMMd;ccccccccccccccc;      Icons: breeze-dark [Qt], breeze-dark [GTK3/4]
cccccc;dNMWXXXWM0:;cccccccccccccc:,       Font: Noto Sans (10pt) [Qt], Noto Sans (10pt) [GTK3/4]
cccccccc;.:odl:.;cccccccccccccc:,.        Cursor: breeze (24px)
ccccccccccccccccccccccccccccc:'.          Terminal: konsole 25.8.1
:ccccccccccccccccccccccc:;,..             CPU: 11th Gen Intel(R) Core(TM) i7-11800H (16) @ 4.60z
 ':cccccccccccccccc::;,.                  GPU 1: NVIDIA GeForce RTX 3060 Mobile / Max-Q [Discre]
                                          GPU 2: Intel UHD Graphics @ 1.45 GHz [Integrated]
                                          Memory: 4.24 GiB / 31.06 GiB (14%)
                                          Swap: 0 B / 8.00 GiB (0%)
                                          Disk (/): 13.81 GiB / 58.59 GiB (24%) - btrfs
                                          Local IP (enp46s0): 192.168.2.18/24
                                          Battery (DELL 70N2F95): 100% [AC Connected]
                                          Locale: en_GB.UTF-8

nvidia-bug-report.log.gz (430.2 KB)

System: Acer Nitro ANV15-52 (i5-13420H / RTX 4060 Max-Q / Intel UHD hybrid, no MUX)
OS: CachyOS (Arch-based, Linux 6.17.1-2-cachyos)
Desktop: KDE Plasma 6.4.5 on Wayland and X11
Driver: NVIDIA 580.95.05 (proprietary DKMS, GSP Firmware Version : N/A)
Monitor: LG OLED C2 3840×2160 @ 120 Hz via HDMI

The issue is persistent and severe: under Wayland, the entire KDE UI runs at roughly 30 FPS on the external monitor, even though it reports 119 Hz. Mouse movement and window dragging are visibly delayed, while the internal panel remains smooth. (EDIT: Mouse movement looks like it’s running at the 119 Hz refresh rate, but if I move any window, the animation feels like it’s below 30 FPS. Also, trying to launch anything on Wayland using only my external monitor/TV makes everything drag: huge mouse latency, 30 FPS or lower, etc. I couldn’t even play Donkey Kong on RetroArch, for instance. :c)

Under X11, everything (including games through Steam + Proton) runs fine at 120 Hz — no input lag, no frame drops.

What I’ve tested:

  • nvidia-open → same lag.

  • Switched to nvidia-dkms (proprietary) + NVreg_EnableGpuFirmware=0 → no change.

  • Tried env vars (KWIN_EXPLICIT_SYNC=1, KWIN_DRM_USE_EGL_STREAMS=0, __GLX_VENDOR_LIBRARY_NAME=nvidia).

  • Used manual KWIN_DRM_DEVICES mapping and auto-selection scripts → no effect.

  • Different CachyOS kernels (LTS and mainline).
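For reference, the GSP-firmware toggle tested above is normally set as a kernel module option via a modprobe.d drop-in. A minimal sketch (the file name is my own choice; note the open kernel modules require GSP, so the option only applies to the proprietary driver):

```
# /etc/modprobe.d/nvidia-gsp.conf  (hypothetical file name)
options nvidia NVreg_EnableGpuFirmware=0
```

Regenerate the initramfs and reboot, then verify with `nvidia-smi -q | grep GSP`.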

The result is always the same: Wayland + external monitor = heavy stutter, X11 = fine. This makes Wayland unusable for anyone who primarily works on an external display.

Relevant system output:

❯ nvidia-smi -q | grep -E "Driver|CUDA|GSP"
Driver Version : 580.95.05
CUDA Version : 13.0
Driver Model
GSP Firmware Version : N/A

❯ uname -a
Linux zen-a 6.17.1-2-cachyos #1 SMP PREEMPT_DYNAMIC Mon, 06 Oct 2025 23:26:58 +0000 x86_64 GNU/Linux

❯ cat /etc/os-release | grep -E "NAME|VERSION"
NAME="CachyOS Linux"
PRETTY_NAME="CachyOS"

0000:00:02.0 VGA compatible controller: Intel Corporation Raptor Lake-P [UHD Graphics] (rev 04)
Subsystem: Acer Incorporated [ALI] Device 171e
Kernel driver in use: i915
Kernel modules: i915, xe

0000:01:00.0 VGA compatible controller: NVIDIA Corporation AD107M [GeForce RTX 4060 Max-Q / Mobile] (rev a1)
Subsystem: Acer Incorporated [ALI] Device 171e
Kernel driver in use: nvidia
Kernel modules: nouveau, nvidia_drm, nvidia

plasmashell 6.4.5
QThreadStorage: entry 8 destroyed before end of thread 0x55d7c9ba9ae0
QThreadStorage: entry 3 destroyed before end of thread 0x55d7c9ba9ae0
QThreadStorage: entry 2 destroyed before end of thread 0x55d7c9ba9ae0

kwin 6.4.5
QThreadStorage: entry 8 destroyed before end of thread 0x55e1b2551630
QThreadStorage: entry 1 destroyed before end of thread 0x55e1b2551630
QThreadStorage: entry 0 destroyed before end of thread 0x55e1b2551630

Monitors: 1
0: +*HDMI-0 3840/1600x2160/900+0+0 HDMI-0

❯ ls -l /dev/dri/by-path/
lrwxrwxrwx - root 12 out 02:19  pci-0000:00:02.0-card → ../card1
lrwxrwxrwx - root 12 out 02:19  pci-0000:00:02.0-render → ../renderD128
lrwxrwxrwx - root 12 out 02:19  pci-0000:01:00.0-card → ../card0
lrwxrwxrwx - root 12 out 02:19  pci-0000:01:00.0-render → ../renderD129

[ 4.515892] input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:06.0/0000:01:00.1/sound/card0/input21
[ 4.515945] input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:06.0/0000:01:00.1/sound/card0/input22
[ 4.515991] input: HDA NVidia HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:06.0/0000:01:00.1/sound/card0/input23
[ 4.516027] input: HDA NVidia HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:06.0/0000:01:00.1/sound/card0/input24
[ 5.476607] nvidia: loading out-of-tree module taints kernel.
[ 5.476614] nvidia: module license 'NVIDIA' taints kernel.
[ 5.476617] nvidia: module license taints kernel.
[ 5.734853] nvidia-nvlink: Nvlink Core is being initialized, major device number 511
[ 5.741560] nvidia 0000:01:00.0: enabling device (0006 -> 0007)
[ 5.741741] nvidia 0000:01:00.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=none:owns=none
[ 5.791587] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 580.95.05 Tue Sep 23 10:11:16 UTC 2025
[ 5.815163] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms 580.95.05 Tue Sep 23 09:41:17 UTC 2025
[ 5.952519] nvidia_uvm: module uses symbols nvUvmInterfaceUnsetPageDirectory from proprietary module nvidia, inheriting taint.
[ 6.602336] [drm] [nvidia-drm] [GPU ID 0x00000100] Loading driver
[ 6.764921] [drm] Initialized nvidia-drm 0.0.0 for 0000:01:00.0 on minor 0
[ 6.892089] nvidia 0000:01:00.0: [drm] fb1: nvidia-drmdrmfb frame buffer device

This appears identical to the PRIME external-monitor lag others report on hybrid laptops without a MUX.
Could NVIDIA confirm whether fixes for these setups will land in the next driver line (590+)?

Happy to collect verbose logs if needed.

So I recently swapped back over to CachyOS myself and have been having the same issues. I assume it’s better to just swap to X11 instead of staying on Wayland for the time being?

Using glxgears, my monitor reports the correct refresh rate, but anything moving around on the external monitor is noticeably delayed. I’ve tried messing with display configs and power profiles, with no luck.

Currently installed NVIDIA driver version: 580.95.05

I’ve attached my specs below in case that would be of aid.

So, it seems we have almost the same issue. It’s sad because by switching to X11 we lose so many cool features, like HDR.

But then I think that maybe it’s the best to stay on X11 for now…

Hi @abchauhan

Any news on this? External monitors are unacceptably laggy on my Optimus laptop with an RTX 2070 when using Wayland. It’s extremely inconvenient.

Having just installed Ubuntu 25.10, I find there is no way to switch to X from Wayland.

In Windows, everything worked fine. With the NVIDIA Linux driver, I can’t go above 30 FPS on the HDMI connection, even with no other monitor connected.

I have tried sudo apt-get upgrade -y. The suggested "switch to X11" option on login is not available any more.

nvidia-bug-report.log.gz (532.9 KB)

I should also add that it allows me to set 60 fps @ 1080p.

Hi,

Sorry it took me so long to respond.

Can you please run this test again and capture a NVIDIA bug report during the test? Please attach it here.

I looked into it but I’m not comfortable with the amount of PII that it shares, I’d have to heavily scrub it, which is not simple since the file is massive and the process is manual.

What applications are you using for testing? Can you share your steps to reproduce the issue?

Any and all applications that have a toggle for vsync, such as Valve’s Dota 2. Nevertheless, as the many reports in this thread show, it doesn’t take a specific application to observe this.

This weekend, I’ll capture CPU perf data of Dota 2 running under the two possible modes, getting half FPS on the external monitor and getting full FPS on the external monitor when the KDE compositor is also running on the dGPU (as described in my previous post), and upload the CPU samples here. Maybe a diff between the two will help elucidate what’s going on.

@abchauhan here are the captured perf samples. One with the compositor running on Intel (the problematic one), and one with the compositor running on NVIDIA.
Both tests were conducted on the same scene, paused, in a replay of Dota 2, with the same graphics settings, v-sync on, and the game’s window full-screen on the external monitor, which is connected via DisplayPort. Furthermore, the Dota process was pinned to specific CPUs using taskset, so no other process would cause scheduling contention.
The samples were captured with

sudo perf record -F 99 -p <pid> -g -o <file-name>.data

and processed into a viewable format using perf script.

I suggest you inspect the profiling information with a viewer such as the Firefox Profiler, since it’ll let you dive into individual threads and build flamegraphs for you.

From a bird’s-eye view, you immediately spot that in both cases _sched_yield takes up the most time in the Vulkan render thread, but in the Intel case you’ll only find 354 samples across all 26 seconds, while on NVIDIA you can find 654 samples in 22 seconds, almost twice as many, which suggests latency is much lower and the thread gets to do work at the right time instead of waiting.
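For anyone wanting to reproduce that sample count without a GUI viewer, here is a rough sketch (count_samples is my own helper, not part of perf) that counts how many stack blocks in `perf script` text output mention a given symbol:

```python
import re

def count_samples(perf_script_text: str, symbol: str = "sched_yield"):
    """Return (total_samples, samples_mentioning_symbol) for `perf script`
    text output, where each sample is a blank-line-separated block: an
    event header line followed by indented stack-frame lines."""
    blocks = [b for b in re.split(r"\n\s*\n", perf_script_text.strip()) if b.strip()]
    hits = sum(1 for b in blocks if symbol in b)
    return len(blocks), hits
```

Run it on the output of `perf script -i <file-name>.data` for each capture and compare the two counts.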

dotaperf-compositor-.zip (973.5 KB)

Don’t you experience a VRAM memory leak over time? I did this in the past, and it indeed helps, but over time (3-4 hours later) my 3060’s 6 GB gets fully saturated, to the point that UI rendering falls apart with visual glitches and stuttering. Basically unusable.

Half a year later, CPU usage is still high on Wayland when using HDMI or DisplayPort. All is good with Nouveau or with X11, so it’s just Nvidia + Wayland.

On Labwc, usage jumps to almost 100%. On GNOME and Plasma, it’s from 15 to 30% as soon as I do something.

Considering that DEs have abandoned X11, System76 first of all with Pop!OS and COSMIC, the future for us looks terrible.

Yep... I just installed the 590 driver, logged into the Plasma Wayland session, and the desktop/games have a lower framerate, even when just using the built-in laptop screen.

Games drop to 40 FPS.

Logged back into X11 → 240 FPS on the desktop using the external monitor.

At this point, I’m going to switch to AMD.

nvidia-bug-report.log.gz (889.0 KB)

On KDE and GNOME, forcing the compositor to use the NVIDIA GPU for rendering fixed things for me. For some reason it was using the iGPU instead, even though my HDMI is wired to the NVIDIA card. My laptop doesn’t have a MUX, so I had to use an env var on KDE and a udev rule on GNOME.

For KDE:
KWIN_DRM_DEVICES="/dev/dri/by-path/<nvidia-first>:<igpu-next>"
Order matters. Whatever you put first is what it renders with.
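Plasma on Wayland imports the systemd user environment, so one place to persist this (assuming a systemd-based distro) is an environment.d drop-in. The file name is my own choice, and the device paths below are the by-path entries from the lspci/ls output earlier in the thread; substitute your own:

```
# ~/.config/environment.d/10-kwin-drm-devices.conf  (hypothetical file name)
# NVIDIA (pci-0000:01:00.0) first so KWin renders on it; Intel iGPU second.
KWIN_DRM_DEVICES=/dev/dri/by-path/pci-0000:01:00.0-card:/dev/dri/by-path/pci-0000:00:02.0-card
```

Log out and back in for the session to pick it up.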

For GNOME:
ENV{DEVNAME}=="/dev/dri/cardX", TAG+="mutter-device-preferred-primary"
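Spelled out as a file, a sketch of that rule (the path and name are my own choice; cardX stays a placeholder, since card numbering can change across boots, as the by-path listing earlier in the thread shows):

```
# /etc/udev/rules.d/61-mutter-preferred-primary.rules  (hypothetical file name)
# Tag the NVIDIA DRM device so Mutter prefers it as the primary GPU.
ENV{DEVNAME}=="/dev/dri/cardX", TAG+="mutter-device-preferred-primary"
```

Re-triggering udev may not be enough for Mutter to notice; a full restart may be required.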

I’m on Plasma 6.5.4 with Wayland and it’s been totally fine. If anyone needs a step-by-step guide feel free to check this out: Solving External Monitor Lag on Linux


Thank you very much for the solution! If you don’t mind, I’ll share your link on https://github.com/NVIDIA/open-gpu-kernel-modules/issues/650.

Do you think the KDE and GNOME developers are aware of this issue?

It is just a workaround rather than a solution itself, but feel free to share it. Did it make things better for you?

And about your question, I don’t know about GNOME, but on KDE there is this bug report you can check out: https://bugs.kde.org/show_bug.cgi?id=452219

Also, the link you have in your GitHub comment is messed up.

This definitely improved the situation, even though it didn’t fully resolve the issue.

I’ve fixed the link — my apologies!


I tried the latest solution above but ran into another error where the main laptop screen freezes entirely when the HDMI cable is unplugged. I tried writing a shell script to detect whenever HDMI disconnects so the iGPU would be re-enabled, but then I ran into another error: GDM only shows the lock screen on the laptop and refuses to work on the external monitor. I had to ditch Linux again just because of this issue. I would ditch NVIDIA entirely if I had the money.

One thing: the iGPU is never disabled; it’s just that everything, internal screen included, is rendered on the NVIDIA GPU, so the iGPU stays idle. About the shell script, I don’t know how you tried to implement it, but I don’t think it will work, as reloading and re-triggering the udev rules didn’t work for me. It required a full restart.

Also, I don’t know much about GDM issues; I mainly use KDE with SDDM, and for me everything is working fine on KDE: unplugging the HDMI, sleep/resume, lock screen, etc.

This issue really needs a real fix soon. If you feel like it, maybe you can try KDE, or if your laptop supports Thunderbolt, then USB-C video output may work as well.

Bro, you just saved me. I can’t thank you enough.

Now I can use HDR.

BTW: Plasma desktop animations, like zooming out the desktop with Super + W, are still running at a low framerate.

But window animations, like maximize and minimize, are running smoothly.

To be fair, games seem to be running better on Wayland after all.

Remember, guys: in Plasma Display Settings → Adaptive Sync → Automatic. Before setting this to Automatic, the performance was not as good.

THANK YOU MAN!


To be clear, the reason why @lattisse and others call this a ‘workaround’ rather than a ‘fix’ is that running everything on the dGPU rather than the iGPU is highly resource-inefficient (horrible) for your battery.

If you’re only going to be using the dGPU for gaming, I’d suggest running games from the display manager using gamescope.

Requires a logout/login, but it’s infinitely more tolerable than using NVIDIA multi-monitor on Wayland.
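A lighter-weight variant of that idea is running a single game through gamescope with NVIDIA’s PRIME render-offload variables, e.g. as a Steam launch option, where %command% is Steam’s placeholder for the game binary (the resolution/refresh flags are example values; adjust to your display):

```
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia gamescope -W 1920 -H 1080 -r 144 -- %command%
```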

Just dropping in to keep it clear that while people have found workarounds, this issue is very much still present.

In Plasma 6.5.4, using NVIDIA driver 590.48.01-6 on Arch, with the AMD GPU as the primary renderer, performance on a secondary monitor connected to a port wired to the NVIDIA GPU is still poor. This happens regardless of the power state of the GPU.

The only workaround is to set the NVIDIA GPU as the primary renderer, which tanks battery life and makes switching between the two annoying, requiring custom scripts whenever you go from a workstation setup to carrying your laptop around.