Scaling Displays on NVIDIA PRIME Results in BadValue Error

I recently got a new computer with a GTX 960M (10de:139b, GM107M) with NVIDIA Optimus.

The laptop itself has a 4K screen, with an external 1080p display. Usually this wouldn’t be an issue, but Ubuntu doesn’t handle HiDPI displays very well, especially with a non-HiDPI display as a secondary.

In short, the easiest solution to my problem would be to use XRandR’s scaling functionality to increase the virtual size of my external display, like so:

xrandr --output HDMI-1-1 --scale 2x2

The expected result is, of course, for the external display to be rendered at double scale, but instead XRandR outputs the errors below:

X Error of failed request:  BadValue (integer parameter out of range for operation)
  Major opcode of failed request:  140 (RANDR)
  Minor opcode of failed request:  26 (RRSetCrtcTransform)
  Value in failed request:  0x40
  Serial number of failed request:  38
  Current serial number in output stream:  39
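
For reference, HDMI-1-1 is simply the name xrandr reports for the external output on my machine; on another setup the output and provider names can be double-checked with:

xrandr --query | grep connected
xrandr --listproviders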

I am running NVIDIA’s driver (version 378.09) with PRIME enabled, obviously. Output of

nvidia-smi

is as follows:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 378.09                 Driver Version: 378.09                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 960M    Off  | 0000:01:00.0     Off |                  N/A |
| N/A   45C    P0    N/A /  N/A |   1183MiB /  2002MiB |      5%      Default |
+-------------------------------+----------------------+----------------------+

XRandR’s binary is reporting itself as version 1.5.0, while the server reports RandR version 1.5. The X server is version 1.18.4, running on kernel 4.4.0-62-generic. UEFI is enabled, but SecureBoot is not.
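
These versions were gathered with the usual commands, listed here in case someone wants to compare:

xrandr --version
Xorg -version
uname -r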

Additional information is available in a question I asked on Ask Ubuntu about the same issue.

What can I do to resolve this issue, or is this a problem with the NVIDIA drivers/PRIME that is out of my control?

I have not uploaded a bug report because it contains sensitive information (sorry), but it will be made available to NVIDIA staff upon request.

I’m bumping this. xrandr works perfectly fine on Intel graphics, but whenever you switch to NVIDIA and try to scale, you get a BadValue error. I’ve tried everything at this point, and it seems the NVIDIA drivers are at fault (I’m using 375, because 378 has more weird issues on my laptop, but never mind that…).

It’s extremely frustrating not to get an answer from NVIDIA about this, what the hell? Are you disregarding the people trying to use Linux as their main development platform for CUDA?

I don’t think PRIME supports transforms. I’ll have to step through the server to see exactly where it’s rejecting the request, but the interface it uses to tell the driver what to display supports rotation but not transforms.
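
(For anyone who wants to poke at this themselves, something like the following should stop in the transform request handler. The function name is taken from the X server’s randr code, so adjust it to your server version, and attach gdb from an SSH session or a VT, since attaching to Xorg freezes the local desktop.)

sudo gdb -p $(pidof Xorg)
(gdb) break ProcRRSetCrtcTransform
(gdb) continue

Re-running the failing xrandr command from the other session should then hit the breakpoint.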

Hi pacificfils and overmorrow, can you attach an nvidia bug report?

>>I’ve tried everything at this point and it seems NVIDIA drivers are at fault
What reason or error makes you think it is an NVIDIA issue? What testing have you done?

What desktop environment are you running: Unity, GNOME, GNOME Shell, KDE, or something else? What are the different ways and commands you are using for scaling?

Hi @sandipt

I’m experiencing the same problem that @pacificfils reported. What I’m trying to achieve is to have two different DPI configurations for a dual-monitor setup. I’m running the following command using XRandR:

xrandr --output eDP-1-1 --primary --mode 3840x2160 --scale 1x1 --pos 0x0 --rotate normal --output HDMI-1-1 --mode 1920x1080 --scale 2x2 --pos 3840x0 --rotate normal
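
The same command with the overall framebuffer size spelled out, in case it helps; xrandr normally works this out itself, and the --fb value below is just 3840 + 2*1920 = 7680 wide by 2160 tall for these two outputs:

xrandr --fb 7680x2160 \
       --output eDP-1-1 --primary --mode 3840x2160 --scale 1x1 --pos 0x0 --rotate normal \
       --output HDMI-1-1 --mode 1920x1080 --scale 2x2 --pos 3840x0 --rotate normal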

These are my desktop environment details:

KDE Plasma Version: 5.8.7
KDE Framework Version: 5.35.0
Qt Version: 5.6.1
Kernel Version: 4.4.0-79-generic
OS Type: 64-bit

Do you know if the NVIDIA drivers will support transforms in the future, or is this a bug?

Here’s some additional info that might be useful for you.

nvidia-smi

+-----------------------------------------------------------------------------+                                                                                                   
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 960M    Off  | 0000:02:00.0     Off |                  N/A |
| N/A   49C    P0    N/A /  N/A |   1565MiB /  4044MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1216    G   /usr/lib/xorg/Xorg                             856MiB |
|    0      2137    G   kwin_x11                                       121MiB |
|    0      2140    G   /usr/bin/krunner                                12MiB |
|    0      2142    G   /usr/bin/plasmashell                           160MiB |
|    0      2943    G   ...el-token=179042A5B38D2338B87533F3841EC681   335MiB |
|    0      5644    G   ...s-passed-by-fd --v8-snapshot-passed-by-fd    78MiB |

xorg.conf

Section "ServerLayout"
    Identifier "layout"
    Screen 0 "nvidia"
    Inactive "intel"
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"
    BusID "PCI:0@0:2:0"
    Option "AccelMethod" "None"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:2@0:0:0"
    Option "ConstrainCursor" "off"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Option "AllowEmptyInitialConfiguration" "on"
    Option "IgnoreDisplayDevices" "CRT"
EndSection
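
In case it matters, this xorg.conf is paired with the usual Ubuntu PRIME wiring that the display manager’s setup script runs at session start (the provider name is what xrandr --listproviders reports here and may differ elsewhere):

xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto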

Feel free to request any additional info you need.

Thank you.

It’s not a driver limitation; it’s a limitation in the way the X server coordinates displays between GPUs when using PRIME.

If transformations are added to the driver interface in a future version of the X server, then we can evaluate supporting them in the driver.