Ubuntu 20.04 - not able to run 6/7 monitor setup with one X screen

Hello there!

I’m trying to make a setup with 7 monitors in total (one monitor gets mirrored with the sixth monitor) run on my machine with Ubuntu 20.04 on it. I am using two GTX 980 Ti (Windforce) cards with an SLI bridge and already tried a few different scenarios (all configured by nvidia-xconfig). It’s using standard X11 with GDM on it.

Since this is a dual boot, the same hardware setup is running without any problems on Windows 10 with SLI activated in 6 and 7 monitor mode (seventh monitor is not always connected).

From a hardware perspective, both cards have all three DisplayPort outputs in use. The second card powers the upper row of monitors as well as the seventh monitor (attached by HDMI), which will get mirrored with one of the DisplayPort-connected monitors.

Card 0 - main:
-> DP0, DP1, DP2
Card 1 - secondary:
-> DP0, DP1, DP2, HDMI0

So far, I tried some different possibilities, starting with a second X screen configured next to the main screen, which does not really work well with GDM at all. But at least all monitors get initialized, albeit with an unused X screen session.

I also tried using Xinerama, which came out as a disaster with everything I tried. It did not work at all and always ended in a black screen with only the mouse pointer on it.

With Base Mosaic (tried with drivers 470, 495, 460, 450, 390) I was finally able to use more than 3 monitors with one X screen, but it only let me activate 5 monitors, with the error:

• MetaMode 1 of Screen 0 has more than 5 active display devices.

I did not try this with the seventh monitor attached, but the sixth monitor already gets turned off automatically at this point.

On top of all that, I also tried downgrading the driver to use the legacy SLI/Multi-GPU modes as described in the docs, but that turned out to work with only one monitor, and not really well.

Since I have already spent a lot of time on this topic, I wonder if there is some limitation that won’t let me get past this.

Is there some way to make it work with Base Mosaic on this specific hardware?


BaseMosaic on GeForce-type cards is limited to 5 monitors on Linux; more are only supported with Quadros.
Please check this thread:

  • delete xorg.conf
  • set kernel parameter nvidia-drm.modeset=1
  • check that two NVIDIA providers are available
  • run xrandr --setprovideroutputsource NVIDIA-G0 NVIDIA-0 && xrandr --auto
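Collected into a script, the steps above might look like the following sketch. The provider names NVIDIA-0 and NVIDIA-G0 are the ones from this thread; other machines may list different names, so verify with `xrandr --listproviders` first.

```shell
#!/bin/sh
# PRIME output setup sketch. Provider names are taken from this thread
# and may differ on other machines -- check `xrandr --listproviders`.
SINK_PROVIDER="NVIDIA-G0"   # secondary GPU (sink, drives the extra outputs)
SRC_PROVIDER="NVIDIA-0"     # primary GPU (source, renders the desktop)

# Only attempt this inside a running X session with xrandr present.
if [ -n "$DISPLAY" ] && command -v xrandr >/dev/null 2>&1; then
    xrandr --listproviders
    xrandr --setprovideroutputsource "$SINK_PROVIDER" "$SRC_PROVIDER"
    xrandr --auto
fi
```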

Thank you very much generix! That actually does the job great.

When using GDM, the last command (xrandr --setprovideroutputsource NVIDIA-G0 NVIDIA-0 && xrandr --auto) is not even needed, as it will automatically detect what is there (at least after a full reboot, which is what I did).

Additionally, when using GDM:
When setting everything up with xrandr, you will notice that the configuration gets lost on reboot. To avoid that, make sure to open the display configuration with the included GNOME GUI display configuration tool and force it to save the display configuration again. I did this by moving one display, saving, and moving it back. After that, you have to apply the config at the global level with the following command. If you set everything up with the included GUI tool only, you just need to run the command afterwards.

sudo cp ~/.config/monitors.xml /var/lib/gdm3/.config/
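A guarded version of that copy, with one addition that is my own assumption rather than something from this thread: since GDM runs as the `gdm` user, the copied file may also need its ownership fixed so the login screen can actually read it.

```shell
#!/bin/sh
# Copy the user's monitor layout to GDM's config so the login screen
# uses the same arrangement.
SRC="$HOME/.config/monitors.xml"
DST="/var/lib/gdm3/.config/monitors.xml"

if [ -f "$SRC" ]; then
    sudo cp "$SRC" "$DST"
    # Assumption: GDM runs as the 'gdm' user and must be able to read it.
    sudo chown gdm:gdm "$DST"
fi
```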

When it comes to graphics RAM usage, I will see how it turns out. I hope at least some applications like Blender are able to use the other graphics card’s RAM to avoid too much usage on the first one. Otherwise I will also have to think about upgrading to 12 GB+ graphics cards… But at least it’s not too much yet with my 4x2560x1440 + 2/3x1920x1200 setup.

A question about that: Is it planned for the PRIME functionality to support both graphics cards’ RAM on Linux in some way in the future?

With PRIME, the primary GPU will always have to keep the whole desktop in its memory to keep it contiguous, so that windows can be moved from monitor to monitor. I don’t think this will change with Xorg; I don’t know about Wayland, though.
With CUDA (Blender), there shouldn’t be an issue: it will use both GPUs and their memory by default, or only the ones set with CUDA_VISIBLE_DEVICES, e.g. running the desktop on the first GPU and using only the secondary for CUDA.
Since the secondary GPU is in offload mode, in theory it should be possible to explicitly set the GPU to use per application by setting the provider to use:
in practice, this didn’t work the last time another user tried, but that shouldn’t keep you from checking whether it has changed in the meantime.
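For example, a sketch of that device restriction; `blender` stands in here for any CUDA application, and index 1 assumes the secondary GPU as enumerated by nvidia-smi:

```shell
#!/bin/sh
# Pin a CUDA workload to the second GPU (index 1 in nvidia-smi's order)
# so the desktop GPU's memory stays free for Xorg.
export CUDA_VISIBLE_DEVICES=1

# 'blender' is only an example application; any CUDA program works the same.
if command -v blender >/dev/null 2>&1; then
    blender
fi
```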

Alright, thank you for your answer, let’s see how things turn out.

With Blender this is good! Right, I will just configure it that way.

I tried things out with the link you just sent, but it does not seem to work for me.

# xrandr --listproviders
Providers: number : 2
Provider 0: id: 0x1b8 cap: 0x1, Source Output crtcs: 4 outputs: 9 associated providers: 1 name:NVIDIA-0
Provider 1: id: 0x35d cap: 0x2, Sink Output crtcs: 4 outputs: 9 associated providers: 1 name:NVIDIA-G0

Still nvidia-smi shows me:

| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|    0   N/A  N/A      4150      G   /usr/lib/xorg/Xorg                298MiB |
|    0   N/A  N/A      5290      G   /usr/lib/xorg/Xorg                984MiB |
|    0   N/A  N/A    219556    C+G   vkcube                              6MiB |
|    1   N/A  N/A      4150      G   /usr/lib/xorg/Xorg                 44MiB |
|    1   N/A  N/A      5290      G   /usr/lib/xorg/Xorg                147MiB |
# __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep vendor
server glx vendor string: NVIDIA Corporation
client glx vendor string: NVIDIA Corporation
OpenGL vendor string: NVIDIA Corporation

It still runs on GPU 0 from what I see there, so it does not seem to work for some reason.
Still I am quite happy with the current solution, maybe this will work out some day.

It should be this, explicitly setting the provider for rendering:
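Presumably this refers to the __NV_PRIME_RENDER_OFFLOAD_PROVIDER variable, matching the command used later in this thread:

```shell
#!/bin/sh
# Render offload pinned to the secondary GPU's provider (NVIDIA-G0,
# as listed by `xrandr --listproviders` earlier in the thread).
export __NV_PRIME_RENDER_OFFLOAD=1
export __GLX_VENDOR_LIBRARY_NAME=nvidia
export __NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G0

# Only run the test application inside an X session.
if [ -n "$DISPLAY" ] && command -v glxgears >/dev/null 2>&1; then
    glxgears
fi
```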

Yes, this changes something.

The whole environment crashes. On the first graphics card (0), all monitors turn off and a console cursor (underscore) appears; it looks like the whole screen session crashed, while the upper three monitors stay active. After a few seconds the login screen is shown and all monitors behave normally. But GDM is not stable in this state and login fails; I had to manually restart it by opening a terminal and restarting the service. After that it worked again.

I tried both, with the same outcome:


The only thing I have found so far in the logs is quite prominent in dmesg, multiple times in a row:

[drm:nv_drm_master_set [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000200] Failed to grab modeset ownership
[drm:nv_drm_master_set [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to grab modeset ownership

Is there anything else I can look into? Or do you think this thing will not be able to run yet that easily?

Interesting, that shouldn’t crash the X server. Which driver version?
Please try


Sadly, that comes out the same. It’s the 495 driver I am currently using.

Not nice. Please recreate the crash, then create a new nvidia-bug-report.log and send it to linux-bugs[at]nvidia.com with a description of the bug (Xserver crash on render offload) and reproduction steps. If possible, also try the new 510 driver. Maybe that will help get it working at some later point.

Alright, I just sent a mail to that email address; I hope it helps.

I will try out that driver.

Hello there!

It took me a while to take another look at this and switch to a newer driver version, but I did.

Some feedback about the 510.47.03 driver version:
When running the __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia __NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G0 glxgears command, the whole GDM environment still crashes, with problems recovering unless the service is fully restarted.

Otherwise the driver runs quite stable for every normal task.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.