I currently have an older GTX 960 alongside a newer GTX 1080 Ti in the same machine. I have four monitors connected to the 1080 Ti and two to the 960. How would I go about getting all of these to work in Linux (the distro I’m currently using is Mint Xfce, but that shouldn’t matter)? I’ve tried everything under the sun to get this working. Any help would be appreciated…
The closest I’ve gotten is trying to use the NVIDIA driver on the GTX 1080 Ti, and nouveau on the older card with the “modesetting” X driver on top (wired up through xrandr). I haven’t gone all the way with this (so I don’t know that it’ll work… and by now I doubt it will; however, I’ve run out of options).
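For reference, the generic PRIME output-sink commands I’ve been trying look like this (the provider names are just examples; substitute whatever xrandr --listproviders reports on your box):

# list the providers and their capabilities first
xrandr --listproviders
# use the modesetting provider as an output sink for the NVIDIA provider
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto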
Alright, generix… I think I’m getting close. It’s a different outcome so far than the original poster you helped in the thread you linked. At the end of post #7, I get the following:
xrandr: Configure crtc 4 failed
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 140 (RANDR)
Minor opcode of failed request: 21 (RRSetCrtcConfig)
Value in failed request: 0x0
Serial number of failed request: 81
Current serial number in output stream: 81
In the original thread, it looked like his issue was that his GPU running “modesetting” did not have the 0x2 (Sink Output) bit in its cap field; mine, however, does. My error output also has the line ‘xrandr: Configure crtc 4 failed’ at the top, where his didn’t. Any clue?
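For reference, that cap field is what xrandr --listproviders prints; on my setup it looks roughly like this (the ids, crtc and output counts here are illustrative, not my exact values):

Providers: number : 2
Provider 0: id: 0x1b8 cap: 0x1, Source Output crtcs: 4 outputs: 6 associated providers: 1 name:NVIDIA-0
Provider 1: id: 0x1f5 cap: 0x2, Sink Output crtcs: 2 outputs: 4 associated providers: 1 name:modesetting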
I also had to use a different xorg.conf, as the one you posted (with the BusIDs updated to match my GPUs) resulted in just a cursor loading in the top-left corner of the screens. My xorg.conf is as follows:
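It follows the usual two-Device shape, roughly like this (the 1080 Ti BusID below is a placeholder, not my actual value; the 960 is at PCI:6:0:0 per the log further down):

Section "Device"
    Identifier "gtx1080ti"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"    # placeholder - check the real bus ID with lspci
EndSection

Section "Device"
    Identifier "gtx960"
    Driver     "modesetting"
    BusID      "PCI:6:0:0"    # the 960, per the Xorg log
EndSection

Section "Screen"
    Identifier "screen"
    Device     "gtx1080ti"
EndSection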
That’s quite interesting; the fact that the modesetting driver on top of the nvidia kernel driver now exposes the Sink Output cap suggests nvidia has implemented PRIME functionality in their DRM driver. Still, it now fails at a later step.
Please attach an xorg log so I can have a look at it.
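If you’re not sure where the log lives, it’s usually one of these, depending on whether the X server runs as root:

/var/log/Xorg.0.log
~/.local/share/xorg/Xorg.0.log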
[ 13.189] (II) Applying OutputClass "nvidia" options to /dev/dri/card0
There’s probably a file /usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf containing an nvidia OutputClass. Please remove that file (a command sketch follows the log excerpt below); it loads the nvidia driver in addition to the modesetting driver for the 960:
[ 13.580] (EE) NVIDIA(GPU-1): Failed to acquire modesetting permission.
[ 13.580] (II) NVIDIA(GPU-1): NVIDIA GPU GeForce GTX 960 (GM206-A) at PCI:6:0:0 (GPU-1)
[ 13.580] (--) NVIDIA(GPU-1): Memory: 2097152 kBytes
[ 13.580] (--) NVIDIA(GPU-1): VideoBIOS: 84.06.0d.00.02
[ 13.580] (II) NVIDIA(GPU-1): Detected PCI Express Link width: 16X
[ 13.581] (II) NVIDIA: Using 24576.00 MB of virtual memory for indirect memory
[ 13.581] (II) NVIDIA: access.
That shouldn’t happen and is probably interfering.
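To get it out of the way without losing it, something like this should do (move it aside, then restart the display manager or reboot so the server re-reads its config):

sudo mv /usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf ~/10-nvidia-drm-outputclass.conf.bak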
When you issue the xrandr command, the modesetting driver complains:
[ 97.042] (EE) modeset(G0): failed to set mode: No space left on device
This might be related to the nvidia driver being loaded.
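You can verify which kernel driver is actually bound to each card with something like:

lspci -k | grep -EA3 'VGA|3D'

Each GPU entry should show a “Kernel driver in use:” line naming nvidia or nouveau.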
I’ll attach the logs again. Also, I just realized… I still have an Oculus Rift headset attached to the GTX 1080 Ti via the HDMI port. Would that have anything to do with any of this, since its resolution settings etc. won’t match the monitors? Likely not the cause… just wanted to put it out there, just in case. Sent you a PM as well.
“After much experimentation, enabling the “glamor” USE flag on x11-base/xorg-server fixes this. I’ve not the faintest idea as to why this fixes it. It’s especially odd, as enabling Glamor should actually break things, according to Nvidia’s own documentation”
What is your opinion, and if it’s something we can try… how do I go about doing this?
The bug report is only relevant for Gentoo users: there, you have to compile the X server yourself with glamor enabled for the modesetting driver to work. That’s irrelevant in your case, since Mint is Ubuntu-based and they know how to compile the X server ;) It’s just the same error message with a different cause.
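If you want to double-check that your prebuilt server loads glamor for the modesetting driver, grepping the log should show it (assuming the default log path):

grep -i glamor /var/log/Xorg.0.log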
So it looks like nvidia is working on PRIME output sink support in their driver, but it currently isn’t finished. You got a step further than the previous experiments but still didn’t reach the goal. You could try with just one monitor connected to each card, to check whether you’re hitting some unknown limit.
Should be revisited when new major driver versions get released.
So for the moment, there’s only the split nvidia/nouveau setup left.