High CPU usage on xorg when the external monitor is plugged in

As far as I can tell, this depends on the specific hardware video output path (configuration) of the device.

Case in point: my Dell Inspiron 16 Plus (7610), a Tiger Lake i7-11800H + Nvidia RTX 3060 device which has one HDMI output and one USB-C port (which can carry display data via Thunderbolt or DisplayPort Alternate Mode).

The HDMI port is served exclusively by the Intel GPU, and the Intel GPU itself is unable to send display data out via USB-C.

The Nvidia GPU is the only GPU that is able to generate any output through the USB-C port (and, IIRC, the Nvidia GPU is unable to output to the built-in screen of the notebook).

(I believe what I am describing is a mux-less design, i.e. the “cheap” way of wiring the outputs: each port is hard-wired to one GPU, with no multiplexer to switch between them.)
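If you want to check how your own machine is wired, the kernel exposes this topology under sysfs. Below is a minimal sketch (assuming the standard Linux /sys/class/drm layout; card and connector numbering will differ per machine) that lists each physical connector, which DRM card it hangs off, and which driver owns that card — so you can see which outputs belong to the Intel GPU (i915) and which to the Nvidia GPU.

```python
#!/usr/bin/env python3
"""Sketch: list which physical connectors belong to which DRM card.

Assumes the standard sysfs layout /sys/class/drm/cardN-<CONNECTOR>;
card and connector numbering is device-specific.
"""
import glob
import os

for conn in sorted(glob.glob("/sys/class/drm/card*-*")):
    name = os.path.basename(conn)        # e.g. "card0-HDMI-A-1" or "card1-DP-1"
    card = name.split("-", 1)[0]         # DRM device the connector hangs off
    try:
        with open(os.path.join(conn, "status")) as f:
            status = f.read().strip()    # "connected" / "disconnected"
    except OSError:
        status = "unknown"
    # The bound driver tells you whether this card is the Intel GPU (i915)
    # or the Nvidia GPU (nvidia / nouveau).
    driver_link = os.path.join("/sys/class/drm", card, "device", "driver")
    driver = (os.path.basename(os.readlink(driver_link))
              if os.path.islink(driver_link) else "?")
    print(f"{name}: status={status}, driver={driver}")
```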

In that configuration, with the exact same external 4K screen attached to the device via

  • HDMI == only Intel GPU works; all good
  • USB-C (using an Alternate Mode) == Intel GPU serves built-in display; Nvidia GPU serves external screen

In the latter configuration, you will see massive CPU load on the Xorg process, and all of that load comes from the in-process Nvidia driver, which makes an enormous number of calls to the Linux vDSO to get the current time.
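To put a number on that load, you can simply sample Xorg’s accumulated CPU time from /proc. This is a minimal sketch (assuming the display server process is literally named “Xorg”; adjust PROC_NAME otherwise) that prints how much of one core Xorg burned over a five-second window:

```python
#!/usr/bin/env python3
"""Sketch: sample the CPU time of the Xorg process from /proc."""
import os
import time

PROC_NAME = "Xorg"                       # assumption: adjust if your server is named differently
CLK_TCK = os.sysconf("SC_CLK_TCK")       # clock ticks per second

def find_pid(name):
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open(f"/proc/{pid}/comm") as f:
                if f.read().strip() == name:
                    return int(pid)
        except OSError:
            continue
    raise SystemExit(f"no process named {name!r} found")

def cpu_seconds(pid):
    with open(f"/proc/{pid}/stat") as f:
        stat = f.read()
    # The comm field may contain spaces, so split after the closing ')'.
    fields = stat.rsplit(")", 1)[1].split()
    utime, stime = int(fields[11]), int(fields[12])   # stat fields 14 and 15
    return (utime + stime) / CLK_TCK

pid = find_pid(PROC_NAME)
before = cpu_seconds(pid)
time.sleep(5)
after = cpu_seconds(pid)
print(f"Xorg used {(after - before) / 5 * 100:.1f}% of one core over 5 s")
```

Run it once with the external screen on HDMI and once on USB-C; the difference between the two readings is the effect described above.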

This posting of mine has more technical detail: Nvidia X11 driver busy-polls kernel on clock_gettime in a tight loop. It also includes an nvidia-smi call that exposes the problem much more clearly: the behaviour gets a whole lot worse than in the “normal” case if you downclock the Nvidia GPU.
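For reference, the downclocking experiment can be driven with nvidia-smi’s clock-locking options. The sketch below is not necessarily the exact invocation from the linked posting; it just illustrates the idea. --lock-gpu-clocks requires root and a reasonably recent GPU/driver, and 210 MHz is an arbitrary low value your GPU may not accept.

```python
#!/usr/bin/env python3
"""Sketch: lock the Nvidia GPU to a low clock, then watch Xorg's CPU use."""
import subprocess
import time

LOW_CLOCK_MHZ = 210   # assumption: pick the lowest graphics clock your GPU supports

try:
    # Pin both the minimum and maximum graphics clock to the low value.
    subprocess.run(["nvidia-smi", "--lock-gpu-clocks",
                    f"{LOW_CLOCK_MHZ},{LOW_CLOCK_MHZ}"], check=True)
    print("GPU clocks locked; watch `top` (or the sampler above) for Xorg...")
    time.sleep(30)
finally:
    # Always undo the clock lock, even if interrupted.
    subprocess.run(["nvidia-smi", "--reset-gpu-clocks"], check=True)
```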