[355.11 / 358.09] ViewPortIn on 2nd monitor results in garbled display for values over 3211x1800

I’m running two monitors in a mixed DPI layout with a GTX 980 Ti as pictured below:

Primary           2nd
+++++++++++++ +++++++++++++
+    DP     + + DVI/HDMI  +
+ 3840x2160 + + 1920x1080 +
+           + +           +
+++++++++++++ +++++++++++++

I would like to run the second monitor with a ViewPortIn/Panning setting of 3840x2160. However, by experimentation I’ve determined that the largest ViewPortIn I can use is 3211x1800; increasing either dimension results in a mostly black 2nd monitor with a narrow band of garbled color along the top. The primary monitor works properly in all cases, even when the 2nd is garbled.

This setting will work:

NVIDIA(0): Setting mode "DPY-7:nvidia-auto-select+0+0,DPY-1:nvidia-auto-select+3840+0{viewportin=3211x1800}"

These settings will NOT work:

NVIDIA(0): Setting mode "DPY-7:nvidia-auto-select+0+0,DPY-1:nvidia-auto-select+3840+0{viewportin=3212x1800}"
NVIDIA(0): Setting mode "DPY-7:nvidia-auto-select+0+0,DPY-1:nvidia-auto-select+3840+0{viewportin=3200x1801}"

I suspect there is a buffer somewhere that is limiting the virtual size, however xrandr tells me:

Screen 0: minimum 8 x 8, current 7051 x 2160, maximum 16384 x 16384

I’m running an up to date Antergos (Arch) install and I’ve tried both the 355.11 and 358.09 drivers. Any suggestions on how to get ViewPortIn=3840x2160 to work would be much appreciated.

nvidia-bug-report.log.gz (116 KB)

I see the same issue here, on a GTX960 driving a 4K display and a second monitor at 1920x1200. 352 and 358 drivers.

ViewPortIn of 3840x2400 (on the 1920x1200 monitor) results in a black screen with a blurred line at the top. However, if I set ViewPortIn to 3840x2401 then it works. Likewise, a ViewPortOut of 1920x1199 works. So I think there’s a bug with an exact ratio of 2 in the Y axis of the transform.

With any of the Invert settings it works fine, so if I could mount the monitor upside down, or in a mirror, it would work. Heh.

Try setting ViewPortIn to 3840x2161 as a workaround for now.
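Something like this should apply the workaround from the command line (a sketch only: DP-0 and HDMI-0 are placeholder connector names, so substitute the ones from your own `xrandr` output or Xorg log):

```shell
# Workaround sketch: bump ViewPortIn one pixel past the exact 2x ratio
# that triggers the bug. Connector names here are placeholders.
nvidia-settings --assign CurrentMetaMode="DP-0: nvidia-auto-select +0+0, \
  HDMI-0: nvidia-auto-select +3840+0 {ViewPortIn=3840x2161}"
```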

I’m seeing the same problem here on 364.12 beta drivers with a GeForce GTX 970M driving an internal display at 3840x2160 and external monitor at 1920x1080.

If I set ViewPortIn=3840x2160 and ViewPortOut=1920x1080 for the external display, I get a flickering line at the top of the external display. As soon as I go to 3840x2161, things display normally.

It seems like a bug, most likely.

Does this still reproduce if you change the drop-down in the PowerMizer page of nvidia-settings from Auto to “Prefer Maximum Performance”? If not, can you please also try adding ForceFullCompositionPipeline=On to your metamode line?
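For anyone who wants to test both suggestions from a terminal rather than the GUI, something along these lines should work (the connector names and offsets are illustrative, not from any specific setup in this thread):

```shell
# Equivalent of the PowerMizer drop-down: 1 = "Prefer Maximum Performance".
nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=1"

# MetaMode with ForceFullCompositionPipeline enabled; replace the
# connector names, modes, and offsets with your own layout.
nvidia-settings --assign CurrentMetaMode="DP-0: nvidia-auto-select +0+0, \
  HDMI-0: nvidia-auto-select +3840+0 {ViewPortIn=3840x2160, ForceFullCompositionPipeline=On}"
```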

I have a very similar problem with a Geforce GTX Titan driver version 361.28, two monitors and the following mode which I set with

nvidia-settings --assign CurrentMetaMode="HDMI-0: nvidia-auto-select +3840+360 {viewportin=2880x1800}, DP-1: nvidia-auto-select +0+0"

The error did not occur with older driver versions.

Yes, the issue still occurs with Prefer Maximum Performance.

The problem goes away if I add ForceFullCompositionPipeline=On! Thanks

NVIDIA Driver Version: 352.63

Tracking this issue under bug 200193245

I am using Ubuntu 16.04 64-bit with the latest Nvidia 364.19 driver in a hybrid-GPU laptop with a GTX960M and integrated Intel graphics. My laptop screen is 4K at 3840x2160. When using the laptop screen only, both the Nvidia and Intel GPUs work: I have to set the Hi-DPI window setting in gnome-settings using the Tweak Tool application and add the 2x scaling in System Settings > Display > Menu and Title Scaling in order to have a “perfect” 4K display.

When I attach an external monitor of 1920x1080, I need to produce a 3840x2160 “x-screen” and scale it to fit the 1920x1080 monitor. If I don’t, application windows and fonts are too big on the 1920x1080 screen even though the menu and title bars are good.

Using the Intel GPU I can simply apply in terminal this:

xrandr --fb 7680x2400 --output eDP1 --mode 3840x2160 --output HDMI1 --mode 1920x1200 --panning 3840x2400+3840+0 --scale 2x2

…and everything works perfectly. All scaling works properly across the two screens, 4k + HD.

Using the Nvidia GPU I am not able to get it to work.

cad@cad-VN7:~⟫ xrandr --fb 7680x2160 --output eDP-1 --mode 3840x2160 --output HDMI-1 --mode 1920x1080 --panning 3840x2160+3840+0 --scale 2x2
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 140 (RANDR)
Minor opcode of failed request: 26 (RRSetCrtcTransform)
Value in failed request: 0x40
Serial number of failed request: 35
Current serial number in output stream: 36

…trying the suggestion mentioned in this thread:

nvidia-settings --assign CurrentMetaMode="ForceFullCompositionPipeline=On, eDP-1: 3840x2160+0+0, HDMI-1: 1920x1080+3840+0 { ViewPortIn=3840x2160 }"

…results in no errors in terminal, but when looking into the xorg.0.log

[ 6461.947] (WW) NVIDIA(0): No valid modes for
[ 6461.947] (WW) NVIDIA(0): "ForceFullCompositionPipeline=On,eDP-1:3840x2160+0+0,HDMI-1:1920x1080+3840+0{ViewPortIn=3840x2160}";
[ 6461.947] (WW) NVIDIA(0): removing.

As an additional note, the Ubuntu gpu-manager scripts write over the xorg.conf file if I try adding metamodes directly to the configuration file. So I am looking for a terminal command that will work.

Could you provide example lines for nvidia-settings? I have the nvidia-settings manual, but it is not much help.

It would be great if the Nvidia Settings GUI application would allow you to set up monitors in my type of setup, but I cannot get the GUI application to do anything “useful.”

Thank you.

It’s a property that gets applied inside the { } brackets at the end of the mode. So yours I think would be

eDP-1:3840x2160+0+0, HDMI-1:1920x1080+3840+0 {ViewPortIn=3840x2160, ForceFullCompositionPipeline=On}
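As a full invocation, that would presumably look something like the following (note the straight quotes; curly quotes pasted from a browser will break the shell parsing):

```shell
# Full-pipeline metamode for a 4K internal panel plus a 1080p external
# display, scaled 2x via ViewPortIn. Connector names from the post above.
nvidia-settings --assign CurrentMetaMode="eDP-1: 3840x2160 +0+0, \
  HDMI-1: 1920x1080 +3840+0 {ViewPortIn=3840x2160, ForceFullCompositionPipeline=On}"
```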

Thank you for the example. After looking over the manual again (after I posted), I noticed that it was supposed to go inside the brackets, and I tried that yesterday, but it yields the same error in the Xorg.0.log.

I’ve tried several approaches but they all produce this:

[ 2508.048] (WW) NVIDIA(0): No valid modes for
[ 2508.048] (WW) NVIDIA(0): "eDP-1:3840x2160+0+0,HDMI-1:1920x1080+3840+0{ViewPortIn=3840x2160,ForceFullCompositionPipeline=On}";
[ 2508.048] (WW) NVIDIA(0): removing.

Some attempts have used these variations:

nvidia-settings --assign CurrentMetaMode="eDP-1: 3840x2160+0+0, HDMI-1: 1920x1080+3840+0 { ViewPortIn=3840x2160, ForceFullCompositionPipeline=On }"

nvidia-settings --assign=CurrentMetaMode="eDP-1: 3840x2160+0+0, HDMI-1: 1920x1080+3840+0 { ViewPortIn=3840x2160, ForceFullCompositionPipeline=On }"

nvidia-settings -a CurrentMetaMode="eDP-1: 3840x2160+0+0, HDMI-1: 1920x1080+3840+0 { ViewPortIn=3840x2160, ForceFullCompositionPipeline=On }"

Ran with and without various permissions prefaces, such as:

sudo -H

I think I’ve encountered the same problem. I have a Dell Precision 15 7510 laptop with a Quadro M2000M GPU. The internal display is 4K, so when I attach an external monitor I want to adjust the scale. But no matter whether I use nvidia-settings or xrandr, right after I set the scale the external monitor goes blank, sometimes with a few lines showing at the top of the screen. I’ve tested two external monitors with different resolutions and sizes, using a DP → VGA dongle and HDMI respectively. Both had the same problem.

Setting ViewPortIn to “3840x2401” can work around this problem. ForceFullCompositionPipeline=On also works around it.

I’m running CentOS 7.2 with kernel-ml (4.8.6; needed for Skylake support) and NVIDIA driver 370.28.
nvidia-bug-report.log.gz (213 KB)

Thanks, YanLi. It sounds like there’s not enough internal bandwidth in the GPU to do the scaling you requested. The driver is supposed to automatically enable the full composition pipeline when you request something that the display engine can’t do, but it sounds like that’s not working properly. I’ll see if I can reproduce this problem.

Cool. Thanks, Aaron! I think you should be able to reproduce it using the following settings:

  1. A 4K internal display (or other HiDPI display)
  2. A low DPI external display
  3. Go to Advanced settings of the Display Configuration in nvidia-settings, double the resolutions of ViewPortIn and Panning of the external display (in my case, ViewPortIn=3840x2400, ViewPortOut=1920x1200+0+0, Panning=3840x2400)

and you should see the external display go blank.

I’m currently trying to write a script to automate the steps I need to do after attaching an external display. I can’t figure out how to enable an external display and then enable panning from the nvidia-settings command line. Maybe I should use xrandr first then use nvidia-settings to enable ForceFullCompositionPipeline=On? Or maybe I should try to make nvidia-settings --load-config-only work?

Panning is specified with an @ sign in the MetaMode syntax, although you shouldn’t need to specify it at all if you just don’t want panning. See Chapter 12, “Configuring Multiple Display Devices on One X Screen”, in the driver README.
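For reference, a MetaMode with an explicit panning domain would look roughly like this (a sketch: the connector names and sizes are illustrative, taken from the setups described earlier in the thread):

```shell
# The panning domain follows the mode as @WxH, before the +X+Y offset.
# Here the 1920x1080 external display pans over a 3840x2160 region.
nvidia-settings --assign CurrentMetaMode="eDP-1: 3840x2160 +0+0, \
  HDMI-1: 1920x1080 @3840x2160 +3840+0"
```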