I am seeing unexpectedly massive memory bandwidth consumption and GPU memory allocation on an idle X system in a reverse PRIME configuration.
This is triggered very specifically by a 4K (3840 x 2160) screen, whereas the system behaves as expected if a 2.5K (2560 x 1440) display is attached.
Good: 2.5K display == 6 MB of GPU memory; the GPU is barely loaded.
Horrible: 4K display == 134 MB of GPU memory(!); the GPU is under massive memory (bandwidth) stress.
For the 4K display, I’d expect at most 20 MB of GPU memory to be consumed.
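For scale, here is a quick sanity check of what a single uncompressed framebuffer costs at each resolution, assuming 4 bytes per pixel and no padding (the driver may tile, pad, or compress, so the real allocation can differ):

```python
def framebuffer_mib(width, height, bytes_per_pixel=4):
    """Size of one uncompressed framebuffer in MiB."""
    return width * height * bytes_per_pixel / (1024 * 1024)

print(f"2.5K: {framebuffer_mib(2560, 1440):.1f} MiB")
print(f"4K:   {framebuffer_mib(3840, 2160):.1f} MiB")
```

By that rough math one 4K framebuffer is ~32 MiB, so 134 MB is on the order of four full-screen 4K buffers, which hints at multiple buffering rather than one scanout surface.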
Are there any tools or strategies to gather more data about those 134 MB of allocations? Is the memory allocated for a specific purpose? What is the structure of the allocations (many small ones, or one big chunk)? Is it allocated once, or repeatedly? …
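A starting point I can think of (these are generic diagnostics, not a per-allocation breakdown; `xrestop` here only accounts for X pixmaps, not driver-internal buffers):

```shell
# Overall GPU memory breakdown reported by the driver
nvidia-smi -q -d MEMORY

# Poll used/total memory once per second, to see whether the
# 134 MB is allocated once (static) or grows/churns over time
nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 1

# Per-X-client pixmap accounting, to see how much of the
# allocation is visible to X as pixmaps vs. hidden in the driver
xrestop
```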
FWIW, nvidia-smi attributes all of this to the Xorg process, so I gather the NVIDIA driver contributes quite substantially.
Let me stress again: the 2.5K screen is fine, the 4K screen is not. How do I tell which is which? Unplug one, plug in the other …