375 - 381.09 (980m) hangs system if external monitor is connected

Clevo P650-RG w/980m + 4k internal screen + external 2560x1080 monitor on HDMI port.

For the longest time, I’ve been using 364.19 on an Ubuntu 17.04 (alpha through beta) laptop with the BIOS video option set to DISCRETE mode (no nvidia-prime). This has served me well, with both my internal and external monitors working together. Due to recent changes in 17.04, I was having issues with that setup, so I’m trying to get the system working with a newer driver.

The first thing I did was switch the BIOS video option to MSHybrid (Optimus) mode and install nvidia-prime (and select nvidia, though it was already configured to use nvidia). Still using the 364.19 driver (+ 4.4 kernel), I was then able to use the external monitor. But, for some reason the driver would not see/use the laptop’s internal screen after starting X and insisted that only the external monitor existed (both in nvidia-settings and the Ubuntu display configuration). I verified that it was in fact using the nvidia driver via glxinfo and even played some of my regular games this way. I only mention this to show that my HDMI port is NOT wired directly to the intel chipset and has been working in both Discrete and MSHybrid mode.
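(For anyone following along, the switching I mean is just the stock Ubuntu prime-select commands, nothing custom:)

prime-select query          # show which profile is currently selected
sudo prime-select nvidia    # switch to the nvidia profile (takes effect after logout/reboot)
sudo prime-select intel     # switch back to the intel profile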

At any rate, due to 364 not compiling with newer kernels, and not being compatible with the latest Ubuntu X server packages, I need to move on to a newer driver. I updated to all the latest packages and am trying to get something to work under the 4.10 kernel.

THE SYMPTOMS: Finally, here’s what I’m seeing… If I boot the system with 381.09 or any of the other most recent drivers using MSHybrid mode (w/nvidia-prime installed and prime-selecting nvidia), it will work with the internal display. BUT, the minute I plug in my external monitor, the internal screen will go black and there’s no way to get it back. Also, the external monitor never starts up. Unplugging the external monitor and attempting to switch back/forth to vt sessions does nothing. I can still ssh into the system, but I can’t end lightdm (just hangs forever). I’m attaching a bug-report that I captured while the system was in this state. Doing a ‘sudo reboot’ also never seems to complete (after it kills my ssh session), so it seems like something is really hung up in the background.

Booting with the external monitor connected, I get the black screen just the same. I tried backleveling to the 375.39 driver, since others had reported more success with it, but I see the exact same symptoms.

Any suggestions are welcome. I’ve tried many many things not even mentioned here (every grub parameter I’ve ever seen), but I’m more than happy to try specific ones to capture better data.
nvidia-bug-report.log.gz (235 KB)

Hi again.
Maybe you’re just hit by the BIOS ACPI issue that many Optimus laptop owners are:
The most common workarounds would be to use either
acpi_osi=! acpi_osi="Windows 2009"
or
acpi_osi="!Windows 2015"
as additional kernel parameters.
You may find further info on the bbswitch GitHub issue list.
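On Ubuntu those usually go into /etc/default/grub, roughly like this (standard GRUB procedure, adjust to your setup):

# append to the existing GRUB_CMDLINE_LINUX_DEFAULT line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=! acpi_osi=\"Windows 2009\""
sudo nano /etc/default/grub
sudo update-grub
sudo reboot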

Thanks - I’ll definitely give those options a try when I get home.

Since my original post I’m noticing other weirdness with the 381.09 drivers too: momentary hangs here and there that leave a ghost cursor on the screen for the duration of the freeze, suspend totally failing to bring X back on resume, and vlc and nvidia-settings causing Xorg to go to 100% CPU until it restarts itself… I’m hoping they’re somehow related to the same problem.

I spent some hours trying to go back to Discrete mode in the BIOS (like I had been using with 364), but couldn’t find any working combination of grub/xorg configs that got me to a point where the nvidia driver would even reach the login screen. :\

I’ll let you know how it turns out!

Intermittent hangs, ghost cursor and X at 100% CPU might be related to prime sync, does it vanish when you start without nvidia-drm.modeset=1?
Hang on resume from suspend sounds like the mentioned acpi issue.

does it vanish when you start without nvidia-drm.modeset=1

Unfortunately not. I don’t really see any difference between nvidia-drm.modeset=1, nvidia-drm.modeset=0, or leaving it out altogether. The only kernel parameter that ever seems to have any effect is nomodeset, and if I set that one the nvidia driver will not start X at all.
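(If it’s useful, the effective value can be double-checked while the nvidia-drm module is loaded with:)

cat /sys/module/nvidia_drm/parameters/modeset    # prints Y or N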

I tried the acpi_osi settings, but none of them appear to change any of the symptoms in either direction.

I’ve looked at Sager and Clevo’s sites too, but they’ve released no BIOSes since the release of the P650-RG (or ever, for that matter).

That’s really a mixed bag of problems you have there.
Maybe rule out acpi problems first. For that, you have to switch to intel, remove all acpi_osi entries, and reboot. Then, see if you can turn your nvidia gpu on and off using bbswitch. The nvidia modules should be unloaded, check with
lsmod
check power state of dGPU:
cat /proc/acpi/bbswitch
should be OFF
lspci -vvvs 0000:01:00.0
does that hang?
turn dgpu on
echo ON > /proc/acpi/bbswitch
cat /proc/acpi/bbswitch
should be ON, check again
lspci -vvvs 0000:01:00.0
does that hang? Does output look strange?
turn it OFF, check lspci.
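If it’s easier to copy/paste, the whole sequence is roughly this (run as root, assuming bbswitch is loaded and the dGPU sits at 01:00.0 as is usual on these laptops):

lsmod | grep -E 'nvidia|nouveau'   # should print nothing
cat /proc/acpi/bbswitch            # expect: 0000:01:00.0 OFF
lspci -vvvs 0000:01:00.0           # does this hang?
echo ON > /proc/acpi/bbswitch      # power the dGPU on
cat /proc/acpi/bbswitch            # expect: ... ON
lspci -vvvs 0000:01:00.0           # hang? strange output?
echo OFF > /proc/acpi/bbswitch     # and back off
lspci -vvvs 0000:01:00.0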

Overall, for debugging, maybe use a more conservative setup: kernel 4.4, nvidia driver 378, and remove nvidia-drm.modeset=1.

Overall, for debugging, maybe use a more conservative setup: kernel 4.4, nvidia driver 378, and remove nvidia-drm.modeset=1.

Looks like I’ll have to keep trying with kernel 4.10, because this is what I saw this morning:

Configuration: Fully patched Ubuntu 17.04, BIOS in MSHybrid mode, Kernel 4.4, nvidia* packages purged, xorg.conf deleted, nothing in kernel params except the lines blacklisting and blocking modesetting for nouveau.

RESULTS: Boots fine into the intel driver. No hangs or extra cursor ghosts. Plugging in the external monitor has no effect - it neither hangs the system nor shows the second monitor as usable in Ubuntu’s display configuration.
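(For anyone reproducing this, the nouveau-blocking boot parameters I mean are the typical pair, along the lines of:

modprobe.blacklist=nouveau nouveau.modeset=0

exact spelling from memory.)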

Configuration: Same as above but with nvidia-378 installed, and ran prime-select intel.

RESULTS: Hangs the system hard immediately after submitting login credentials on lightdm screen. System is completely hung, can’t ssh into it or even REISUB reboot it. Have to power off.

I tried re-installing nvidia-378 several times, prime-selecting intel again, but got the same results. The install and select throw no errors but the system always hangs after login.

Configuration: Same as above, but booting 4.10 kernel instead.

RESULTS: Logs in to intel driven desktop fine, no ghosting or hangs.

This makes it seem like something in prime does not like the 4.4 kernel when using the 378 driver.
I don’t think I ever had this problem when using the 364 drivers, but I wasn’t using prime with them and was in DISCRETE mode most of that time.

06:35:53 evil@sager ~» glxinfo | grep OpenGL
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) HD Graphics 530 (Skylake GT2) 
OpenGL core profile version string: 4.5 (Core Profile) Mesa 17.0.3
...

So, continuing with your advice but with the caveat that it’s the 4.10 kernel…

The nvidia modules should be unloaded, check with lsmod
check power state of dGPU: cat /proc/acpi/bbswitch
should be OFF

06:45:53 evil@sager ~» lsmod | grep nvidia
06:47:19 evil@sager ~» cat /proc/acpi/bbswitch
0000:01:00.0 ON
07:08:15 evil@sager ~» lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 530 (rev 06)
01:00.0 VGA compatible controller: NVIDIA Corporation GM204M [GeForce GTX 980M] (rev a1)

So… before I proceed further, nvidia’s definitely not loaded but the power state doesn’t look right?
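(As a cross-check before going further, I can also look at the card’s runtime PM state in sysfs - assuming the dGPU address from the lspci output above:)

cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status   # active / suspended
cat /sys/bus/pci/devices/0000:01:00.0/power/control          # auto / on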

Thanks again for your assistance troubleshooting this.

BTW: I have also submitted a request to Sager in an attempt to get the latest BIOS for this system (it’s actually a Sager NP8658-S w/4k display - which are rebranded Clevo P650RG’s with a branded BIOS), in hopes that it may have some ACPI fixes.

Ran into another weird issue that might be worth mentioning. Left my system sitting for a while and ran into this problem when I got back…

Configuration: Fully patched Ubuntu 17.04, BIOS in MSHybrid mode, Kernel 4.10, nvidia-378 packages installed but have prime switched to intel, nothing in kernel params except the lines blacklisting and blocking modesetting for nouveau.

The Issue: Locked screen, walked away for about 30 minutes or so, came back and could not wake laptop. At least, screen stayed black.

The Weirdness: I eventually had to REISUB to reboot. But the system no longer booted to LightDM. It would hang hard at the point where lightdm should have started. Had to completely power off to reboot. Unfortunately, the problem was persistent.

During the reboots I noticed nvidia-persistenced trying to start, and failing. Looking at the logs later I would see:

Apr 13 08:47:24 sager systemd[1]: Started ACPI event daemon.
Apr 13 08:47:24 sager systemd[1]: Starting NVIDIA Persistence Daemon...
Apr 13 08:47:24 sager systemd[1547]: nvidia-persistenced.service: Failed at step EXEC spawning /usr/bin/nvidia-persistenced: No such file or directory

Booted to recovery and did --reinstall of nvidia-378 and nvidia-prime, re-ran prime-select intel. Rebooted again… but the problem persisted. Double-checked to make sure nothing had created an xorg.conf or something silly like that, but found no issues.

Finally: Booted to recovery, removed/purged nvidia*, and rebooted again. Now it boots to LightDM with no issue. Locked the screen and let it sit for a while… unlocks fine.
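If I re-install the drivers and hit that nvidia-persistenced EXEC failure again, I’ll sanity-check it with something along these lines (just a sketch):

ls -l /usr/bin/nvidia-persistenced           # does the binary actually exist?
systemctl status nvidia-persistenced         # unit state and recent journal lines
sudo systemctl disable nvidia-persistenced   # keep it out of the way while debugging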

Prime just doesn’t seem to want to let this system run normally, even when not using the nvidia drivers. Seems really similar to what people were reporting here: https://devtalk.nvidia.com/default/topic/991853/linux/complete-freeze-with-nvidia-prime/

You can go on testing to turn off and on the dGPU using bbswitch with nvidia drivers purged. This makes sure they’re not loaded;)
If that fails, that’s another bug with acpi. It really has nothing to do with prime, it’s just that it’s more often triggered when using prime than not.
Then, you have to move over to:
https://github.com/Bumblebee-Project/Bumblebee/issues/764
Lekensteyn is expert on that.

Edit: When you tested the acpi_osi settings I hope you removed the acpi_osi=Linux setting

When you tested the acpi_osi settings I hope you removed the acpi_osi=Linux setting

I did remove it. I was testing with only the acpi_osi entries I had seen in the bumblebee thread.

You can go on testing to turn off and on the dGPU using bbswitch with nvidia drivers purged.

/proc/acpi/bbswitch doesn’t appear to get created unless the nvidia drivers are present… even though I had manually installed the bbswitch-dkms package and rebooted.

I’ll try installing the nvidia-378 drivers, prime-selecting intel, then testing with the 4.4 kernel, and doing the tests from a console without logging into lightdm, assuming that’s a valid alternative.
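To get to a console without lightdm in the way I’ll probably just do something like:

sudo systemctl stop lightdm                    # stop the display manager for this boot
sudo systemctl set-default multi-user.target   # or boot straight to a text console until switched back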


What I’ve already tried in the past couple of hours with the 4.10 kernel:

I first re-installed the nvidia-378 drivers, nvidia-prime, prime-selected the intel drivers, and rebooted. The system still immediately hangs when lightdm loads. I’m really beginning to wonder if the one time it loaded lightdm on the first try this morning wasn’t dumb luck.

So, I purged those drivers and tried the 381 drivers again (still prime-selecting intel before reboot). Same results.

So, I purged those drivers and tried 375. Same results.

Basically, I’m stuck with several unusable scenarios:

  • 4.10 kernel, any nvidia driver, prime-select intel = hang hard when lightdm starts.

  • 4.4 kernel, any nvidia driver, prime-select intel = lightdm loads, but system hangs hard after login.

  • 4.4 or 4.10 kernel, nvidia drivers purged = system logs in and uses intel driver fine, but I can’t use the hdmi connected external monitor in this scenario.

  • 4.4 or 4.10 kernel, any nvidia driver, prime-select nvidia = random temporary system freezes, 100% CPU if I load nvidia-settings (which never displays, and the system gets slower until it hangs or X restarts), plugging in the external monitor results in both screens being black/unusable.

  • 4.4 and nvidia-364, older xserver packages, prime-select nvidia = works… but only the external HDMI monitor is seen/used if attached.

Without nvidia etc. you have to manually modprobe bbswitch to use it.
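i.e. something like:

sudo modprobe bbswitch                # creates /proc/acpi/bbswitch
sudo modprobe bbswitch load_state=0   # alternatively, power the card off right at load (if I remember the option correctly)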

Ahhh… thx for the info! Will try that now. I already tried the other way I mentioned in my last message, and the system hung hard when I did ‘cat /proc/acpi/bbswitch’ from the console.

BRB with a test with no nvidia drivers present.

By default, bbswitch shows mine to be on:

13:54:38 evil@sager ~» cat /proc/acpi/bbswitch 
0000:01:00.0 ON

13:54:49 evil@sager ~» lspci -vvvs 0000:01:00.0
01:00.0 VGA compatible controller: NVIDIA Corporation GM204M [GeForce GTX 980M] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: CLEVO/KAPOK Computer GM204M [GeForce GTX 980M]
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 16
        Region 0: Memory at de000000 (32-bit, non-prefetchable) 
        Region 1: Memory at c0000000 (64-bit, prefetchable) 
        Region 3: Memory at d0000000 (64-bit, prefetchable) 
        Region 5: I/O ports at e000 
        Expansion ROM at df000000 [disabled] 
        Capabilities: <access denied>
        Kernel modules: nvidiafb, nouveau

And here’s what I see when turning it off.

root@sager:~# echo OFF > /proc/acpi/bbswitch
root@sager:~# cat /proc/acpi/bbswitch
0000:01:00.0 OFF
root@sager:~# lspci -vvvs 0000:01:00.0
01:00.0 VGA compatible controller: NVIDIA Corporation GM204M [GeForce GTX 980M] (rev ff) (prog-if ff)
        !!! Unknown header type 7f
        Kernel modules: nvidiafb, nouveau

EDIT:
Switching it back on is not a problem (here’s the output as root to show full capabilities):

root@sager:~# echo ON > /proc/acpi/bbswitch   
root@sager:~# lspci -vvvs 0000:01:00.0     
01:00.0 VGA compatible controller: NVIDIA Corporation GM204M [GeForce GTX 980M] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: CLEVO/KAPOK Computer GM204M [GeForce GTX 980M]
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 16
        Region 0: Memory at de000000 (32-bit, non-prefetchable) 
        Region 1: Memory at c0000000 (64-bit, prefetchable) 
        Region 3: Memory at d0000000 (64-bit, prefetchable) 
        Region 5: I/O ports at e000 
        Expansion ROM at df000000 [disabled] 
        Capabilities: [60] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
                Address: 0000000000000000  Data: 0000
        Capabilities: [78] Express (v2) Legacy Endpoint, MSI 00
                DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 <64us
                        ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
                DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                        RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
                        MaxPayload 256 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
                LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <1us, L1 <4us
                        ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 8GT/s, Width x16, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range AB, TimeoutDis+, LTR+, OBFF Via message
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete+, EqualizationPhase1+
                         EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-
        Capabilities: [100 v1] Virtual Channel
                Caps:   LPEVC=0 RefClk=100ns PATEntryBits=1
                Arb:    Fixed- WRR32- WRR64- WRR128-
                Ctrl:   ArbSelect=Fixed
                Status: InProgress-
                VC0:    Caps:   PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
                        Arb:    Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
                        Ctrl:   Enable+ ID=0 ArbSelect=Fixed TC/VC=ff
                        Status: NegoPending- InProgress-
        Capabilities: [250 v1] Latency Tolerance Reporting
                Max snoop latency: 0ns
                Max no snoop latency: 0ns
        Capabilities: [258 v1] L1 PM Substates
                L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
                          PortCommonModeRestoreTime=255us PortTPowerOnTime=10us
        Capabilities: [128 v1] Power Budgeting <?>
        Capabilities: [420 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
                AERCap: First Error Pointer: 00, GenCap- CGenEn- ChkCap- ChkEn-
        Capabilities: [600 v1] Vendor Specific Information: ID=0001 Rev=1 Len=024 <?>
        Capabilities: [900 v1] #19
        Kernel modules: nvidiafb, nouveau

Looks normal, would have been too easy. But I noticed something which may be a new hint, from lspci:

LnkCtl: ASPM Disabled;
from dmesg:
ACPI FADT declares the system doesn’t support PCIe ASPM, so disable it
On a recent rig like yours, this shouldn’t be happening.
Take a look at this:
PCIe, power management, and problematic BIOSes [LWN.net]
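If you want to see what the kernel decided and (carefully) experiment, roughly:

dmesg | grep -i aspm                          # look for the FADT message
cat /sys/module/pcie_aspm/parameters/policy   # current ASPM policy
# kernel parameter to override the FADT claim - can hang machines whose firmware
# really can't do ASPM, so only as an experiment:
#   pcie_aspm=force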
Are there any settings in bios to tweak?

Are there any settings in bios to tweak?

Unfortunately not. This is part of the reason I sent a note to Sager yesterday asking for any newer BIOS they might have.

There are no PCIe settings in the BIOS - the chipset options only contain 5 settings, three of which are unrelated (Intel Virtualization, Combo Slot function, Flexicharger). The only two settings that are relevant are “MSHYBRID or DISCRETE” and GPU Performance Scaling (which I’ve left enabled for performance).

I would think that hangs would result if ASPM was being enabled when it shouldn’t be; what’s in the logs appears to be the “safe” way of not using it at all.

Well, Sager did respond with the latest BIOS for my system. Unfortunately, it changes very little and adds no extra configuration options relevant to my issues.

Initial testing shows my symptoms remain unchanged in the different configurations.

All I can think to do now is to backlevel all my xorg packages and run nvidia-364 under kernel 4.4 (since it doesn’t compile with newer kernels). That’s the only way I might still use the nvidia chipset on my external (physically larger) monitor.

It really sucks that none of the drivers released in the last 10 months work (especially the vulkan stuff) since I bought this machine 13 months back to be a Linux gaming powerhouse. :\

At any rate, thanks for your help Generix - let me know if you think of anything else and I’ll be glad to give it a go.

You’re probably right that disabling ASPM should be safe, but it’s one more item on the list suggesting the device’s acpi is severely flawed.
Then there’s the case where the dGPU was still ON when you had switched to intel even though the drivers were unloaded. Normally, with ubuntu’s nvidia-prime, gpumanager would check for the current profile and, while set to intel, it would unload the driver, turn off the gpu and remove the xorg.conf. You can check that from its logfile /var/log/gpumanager.log. Then there’s the crash while on intel (I don’t know if a suspend/resume cycle was involved).
This would mean that simply initialising the dGPU by once loading and then unloading the driver makes your computer go awry. I don’t know what newer drivers are doing differently that triggers those kinds of problems.
I think you should really upload your acpi tables at https://bugs.launchpad.net/lpbugreporter/+bug/752542 and ask at the bbswitch issue list if someone can take a look at it given all your problems. Seems to be the last resort since the new bios didn’t bring any news.
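Dumping the tables for the upload is basically just (acpidump is in the acpica-tools package, if I remember correctly):

sudo acpidump > acpidump.txt   # dump all ACPI tables into one text file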

I think you should really upload your acpi tables at https://bugs.launchpad.net/lpbugreporter/+bug/752542
and ask at the bbswitch issue list if someone can take a look at it given all your problems.

Thanks for the suggestion - I never saw that bug before. I’ll upload the tables ASAP.

Uploaded the tables to the launchpad bug report.

I reverted my xserver and kernel and am back to working on the external monitor via the 364 drivers. I guess what I should do for now is try to figure out why only the external monitor works in this configuration - since it looks like I’m stuck with it.

Well, I got 364 to see both monitors again, but I had to switch the BIOS back to DISCRETE mode to do it.

In MSHYBRID mode, I set nogpumanager in the kernel parameters and used “install i915 /bin/false” in a modprobe blacklist (and then rebuilt the initramfs for the 4.4 kernel) to prevent the i915 module from loading. I used lsmod on subsequent reboots and confirmed it was not loading.
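Roughly what I did, from memory (the modprobe.d filename is just whatever I happened to pick):

echo "install i915 /bin/false" | sudo tee /etc/modprobe.d/disable-i915.conf
sudo update-initramfs -u -k all    # rebuild the initramfs (I only needed the 4.4 one)
sudo reboot
lsmod | grep i915                  # confirm it stays unloaded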

I then set up an xorg.conf with two screens (and only the nvidia driver) defined. However, the Xorg log never showed the laptop screen and nvidia-settings was unable to detect it. Only the external monitor was seen.

After flipping the BIOS to DISCRETE, the internal screen (and external) was recognized and working immediately with no configuration:

[    47.179] (--) NVIDIA(0): Valid display device(s) on GPU-0 at PCI:1:0:0
[    47.179] (--) NVIDIA(0):     DFP-0
[    47.179] (--) NVIDIA(0):     DFP-1 (boot)
[    47.179] (--) NVIDIA(0):     DFP-2
[    47.179] (--) NVIDIA(0):     DFP-3
[    47.179] (--) NVIDIA(0):     DFP-4
[    47.179] (--) NVIDIA(0):     DFP-5
[    47.209] (--) NVIDIA(0): Ancor Communications Inc ASUS MX299 (DFP-0): connected
[    47.210] (--) NVIDIA(0): Ancor Communications Inc ASUS MX299 (DFP-0): Internal TMDS
[    47.210] (--) NVIDIA(0): Ancor Communications Inc ASUS MX299 (DFP-0): 600.0 MHz maximum pixel clock
[    47.210] (--) NVIDIA(0): 
[    47.210] (--) NVIDIA(0): SDC (DFP-1): connected
[    47.210] (--) NVIDIA(0): SDC (DFP-1): Internal DisplayPort
[    47.210] (--) NVIDIA(0): SDC (DFP-1): 960.0 MHz maximum pixel clock
[    47.210] (--) NVIDIA(0): 
[    47.210] (--) NVIDIA(0): DFP-2: disconnected
[    47.210] (--) NVIDIA(0): DFP-2: Internal DisplayPort
[    47.210] (--) NVIDIA(0): DFP-2: 960.0 MHz maximum pixel clock
[    47.210] (--) NVIDIA(0): 
[    47.210] (--) NVIDIA(0): DFP-3: disconnected
[    47.210] (--) NVIDIA(0): DFP-3: Internal TMDS
[    47.210] (--) NVIDIA(0): DFP-3: 330.0 MHz maximum pixel clock
[    47.210] (--) NVIDIA(0): 
[    47.210] (--) NVIDIA(0): DFP-4: disconnected
[    47.210] (--) NVIDIA(0): DFP-4: Internal DisplayPort
[    47.210] (--) NVIDIA(0): DFP-4: 960.0 MHz maximum pixel clock
[    47.210] (--) NVIDIA(0): 
[    47.210] (--) NVIDIA(0): DFP-5: disconnected
[    47.210] (--) NVIDIA(0): DFP-5: Internal TMDS
[    47.210] (--) NVIDIA(0): DFP-5: 330.0 MHz maximum pixel clock

I’m putting this all here half as a note to myself, because I may need the info to revert again after I test nVidia’s next driver. :)