PRIME and PRIME Synchronization

update: rearranged config sections for better reading:

Section "InputDevice"
    # generated from data in "/etc/conf.d/gpm"
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol"
    Option         "Device" "/dev/input/mice"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection
Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Device"
    Identifier     "nvidia"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BusID "PCI:1:0:0"
EndSection
Section "Device"
    Identifier     "intel"
    Driver         "modesetting"
EndSection


Section "Monitor"
    # HorizSync source: edid, VertRefresh source: edid
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Samsung SMBX2335"
    HorizSync       30.0 - 81.0
    VertRefresh     56.0 - 75.0
    Option         "DPMS"
EndSection
Section "Monitor"
    # HorizSync source: edid, VertRefresh source: edid
    Identifier     "Monitor1"
    VendorName     "Unknown"
    ModelName      "Samsung SMBX2335"
    HorizSync       30.0 - 81.0
    VertRefresh     56.0 - 75.0
    Option         "DPMS"
EndSection


Section "Screen"
    Identifier     "nvidia"
    Monitor        "Monitor0"
    Device         "nvidia"
    DefaultDepth    24
    Option "AllowEmptyInitialConfiguration"
    Option "TwinView" "true"
EndSection
Section "Screen"
    Identifier     "intel"
    Monitor        "Monitor1"
    Device         "intel"
    DefaultDepth    24
    Option "TwinView" "true"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Section "ServerLayout"
    Identifier     "layout"
    Screen      0 "nvidia"
    Screen      1 "intel" rightOf "nvidia"
    Inactive "intel"
EndSection
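For reference, a config like the one above is usually paired with the RandR 1.4 output-source step at session start (the thread below mentions the same `xrandr --setprovideroutputsource` command). This is a sketch; the provider names "modesetting" and "NVIDIA-0" are the common defaults, not confirmed for this machine, so check `xrandr --listproviders` first. It is guarded so it does nothing outside a running X session.

```shell
# Make the iGPU's modesetting provider display images sourced from the
# NVIDIA GPU, then bring up the outputs. Provider names are assumptions;
# verify them with `xrandr --listproviders`.
if [ -n "$DISPLAY" ] && command -v xrandr >/dev/null 2>&1; then
    xrandr --setprovideroutputsource modesetting NVIDIA-0
    xrandr --auto
    status="providers configured"
else
    status="no X session; nothing to do"
fi
echo "$status"
```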

seems like this is not implemented: if i put this in my .xinitrc:

xrandr --output HDMI-0 --auto --primary --output HDMI-2 --auto --left-of HDMI-0 --pos $(( 2**17 ))x0

the cursor will cross from the monitor on the left (hdmi<->dGPU) to the monitor on the right (hdmi<->iGPU) at pretty much the right position. but there is a catch: if i continue moving the cursor to the right and come back, it's all out of sync and the split position is somewhere in the middle of the left monitor.
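One way to see where the server thinks the monitor boundary is after such drift is to dump its current monitor geometry (the HDMI-0/HDMI-2 names come from the .xinitrc line above). A small guarded sketch, harmless on a headless shell:

```shell
# Ask the X server for its current notion of monitor positions and
# sizes; the reported x-offsets show where the crossover point sits.
if [ -n "$DISPLAY" ] && command -v xrandr >/dev/null 2>&1; then
    layout="$(xrandr --listmonitors)"
else
    layout="no X session"
fi
printf '%s\n' "$layout"
```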

Hi jrun,

What driver are you running? 375.10 beta fixes some issues when dGPU and iGPU displays are being mixed.

If possible, please post your nvidia-bug-report.log.gz generated by running ‘nvidia-bug-report.sh’.

Thanks,

i’m running 370.28 on gentoo

bug report:
https://gist.github.com/52e8a7b9b1272f8997ea8b434b4e735b

Please try 375.10 Beta: https://devtalk.nvidia.com/default/topic/972585

fails to compile!

libXNVCtrlAttributes/NvCtrlAttributesNvml.c:36:18: fatal error: nvml.h: No such file or directory
 #include "nvml.h"
                  ^

is this related:
https://github.com/NVIDIA/nvidia-settings/issues/4

jrun,

If you’re trying to build nvidia-settings from source for some reason, then yes, you need commit 168e17f from GitHub.

nvidia-settings is now a separate package in gentoo (although it lags behind nvidia-drivers). so i'm running 375.10 now. still mirroring… but something nice happened; kmscon started working with the nvidia blobs! it didn't before, possibly due to nv not supporting dumb buffers:

Oct 28 13:18:52 zorro kmscon[411]: [0000.222379] ERROR: video_drm2d: driver does not support dumb buffers (video_init() in src/uterm_drm2d_video.c:335)

i could just not use the iGPU altogether now, but it would be nice to see twinview working with PRIME sync.

UPDATE: it all works with this one-liner:

xrandr --output HDMI-0 --auto --primary --output HDMI-2 --auto --right-of HDMI-0

just not with my config file.
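For comparison, the minimal xorg.conf that typically accompanies this kind of PRIME output setup drives everything from a single nvidia screen and leaves the iGPU to the modesetting driver as an output sink, with no second Device/Screen pair. This is a sketch, not a confirmed working config for this machine; the BusID is taken from the config posted above, so verify yours with `lspci`:

```
Section "ServerLayout"
    Identifier "layout"
    Screen 0 "nvidia"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1:0:0"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Option "AllowEmptyInitialConfiguration"
EndSection
```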

hmm, not working as smoothly as i had hoped. i keep losing control of the keyboard. here is the printk dump:
https://gist.github.com/257/1a4fccf55574f7c2c335f838b2c879de

Gnarl, very funny.
375.10 crashing badly:
[ 15.278] (WW) NVIDIA: This driver was compiled against the X.Org server SDK from git commit 25e4f9ee68b99c2810efdb6cd8c56affa45e1fea and may not be compatible with the final version of this SDK.

Obviously, it’s built against another commit.

Edit: using the xserver at commit 25e4f9e, 375.10 works. agoins, I think you know all the names ;)

Ah, thanks for catching that. The ABI commit must have gotten updated without updating the manual. Sorry for the inconvenience.

The ABI has been declared frozen, so this should stop being an issue in the next release.

Thanks,

Hello,
I'm testing nvidia-drm.modeset=1 (using arch linux on an optimus laptop with a GeForce GTX 970M)

xorg gives me:

[  1043.199] (EE) Backtrace:
[  1043.200] (EE) 0: /usr/lib/xorg-server/Xorg (OsLookupColor+0x139) [0x59cd49]
[  1043.200] (EE) 1: /usr/lib/libc.so.6 (__restore_rt+0x0) [0x7f8bec9bc0af]
[  1043.201] (EE) 2: /usr/lib/xorg/modules/drivers/nvidia_drv.so (nvidiaAddDrawableHandler+0x33f0e) [0x7f8be6bf04fe]
[  1043.201] (EE) unw_get_proc_name failed: no unwind info found [-10]
[  1043.201] (EE) 3: /usr/lib/xorg/modules/drivers/nvidia_drv.so (?+0x33f0e) [0x7f8be6b9096e]
[  1043.201] (EE) 4: /usr/lib/xorg/modules/drivers/nvidia_drv.so (nvidiaAddDrawableHandler+0xab95) [0x7f8be6b9dd45]
[  1043.201] (EE) 5: /usr/lib/libnvidia-glcore.so.375.10 (nvidiaAddDrawableHandler+0x51cb2e) [0x7f8be75c1d4c]
[  1043.201] (EE) 
[  1043.201] (EE) Segmentation fault at address 0x28
[  1043.201] (EE) 
Fatal server error:
[  1043.201] (EE) Caught signal 11 (Segmentation fault). Server aborting
[  1043.201] (EE) 
[  1043.201] (EE)

Any idea?

It looks like something I’d expect to see if the ABI was mismatched. Which commit did you use to build Xorg? Try 25e4f9e. 375.10 beta was released before the ABI was frozen, so it’s a bit behind ToT. It should stop being a problem soon, now that the ABI has been frozen.

Ok, it seems that by setting:

Section "ServerFlags"
    Option "IgnoreABI" "1"
EndSection

it doesn't segfault anymore. anyway, I'm not building from source; I'm using arch packages (they just added the 375.10 driver to the repo).

at least now I get this in dmesg:

[drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[drm] No driver support for vblank timestamp query.
nvidia-modeset: Allocated GPU:0 (GPU-11f5c854-549f-e316-402b-12d0ed756f2a) @ PCI:0000:01:00.0
nvidia-modeset: Freed GPU:0 (GPU-11f5c854-549f-e316-402b-12d0ed756f2a) @ PCI:0000:01:00.0

But I'm still not getting vsync; glxspheres64 runs at 4k frames/sec.
I don’t know if that “freed” is good.
Anyway I’m at your disposal to build stuff and test things in order to make it work :D.

Please run ‘nvidia-bug-report.sh’ and attach the resulting ‘nvidia-bug-report.log.gz’, so I can get more information about your system.

Using an Arch package for the X server likely isn’t going to serve you right now. Getting PRIME Synchronization not only requires a driver that supports it, but an X server that supports it as well. The ABI for Xorg 1.19 (ABI 23) was only recently frozen, so 375.10 beta does not have support for it. It requires a Git build of Xorg from commit 25e4f9e, because the ABI has changed between then and now. If the ABI does not match between the server and the driver, you can get strange crashing like the above because the offsets of structure members will be wrong and the driver and server will call into each other incorrectly.

ABI 23 was recently frozen (no more changes), so the next driver release should support Xorg 1.19 RC2 and later. It will be much easier to use packages then, as you won’t need a server built from an arbitrary commit. If you aren’t in a hurry, you might save some headaches by waiting a short while.

If you are using a packaged version of Xorg, it’s likely ABI 20, Xorg 1.18. 375.10 will run against this just fine because we build in support for all the frozen ABIs, but it will not support PRIME Sync. For that, you need ABI 23.
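A quick way to confirm which video driver ABI your server actually exposes is to grep the server log for its ABI announcement (ABI 23 corresponds to Xorg 1.19, ABI 20 to Xorg 1.18). A sketch assuming the default log location; a rootless server logs under `~/.local/share/xorg/` instead:

```shell
# Print the video driver ABI line from the X server log, e.g.
# "ABI class: X.Org Video Driver, version 23.0".
log=/var/log/Xorg.0.log
if [ -r "$log" ]; then
    grep -m1 'ABI class: X.Org Video Driver' "$log"
else
    echo "no readable log at $log"
fi
```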

It’s strange that adding “IgnoreABI” fixed your segfaulting issue. If you were running against ABI 23 and lacked “IgnoreABI”, I would expect the server to just bail out due to the ABI not being officially supported. If anything, adding “IgnoreABI” should result in segfaults if you have a mismatched ABI. Hopefully the bug report will clear things up.

Thanks,

Thanks, this explains many things.

anyway, here is the report.

I will build xorg from github at the commit you pointed to.
nvidia-bug-report.log.gz (257 KB)

So, since wayland works with kms, should it work out of the box?

Hi, I’m using Fedora 25 with Xorg 1.19 and NVIDIA 375.20 driver from negativo17’s repo. My laptop is Optimus with a Skylake 520 iGPU and GTX 960M dGPU.

I cannot seem to get either Wayland (login uses X; I can't use the Gnome Wayland session) or PRIME Synchronization (screen tearing still happens even with ForceFullCompositionPipeline) working. I believe I need to enable KMS, but if I add nvidia-drm.modeset=1 to my kernel options, I no longer boot to a GUI. Can KMS work with Optimus?
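After adding nvidia-drm.modeset=1 to the kernel command line, you can check whether the switch actually took effect by reading the module parameter back from sysfs. A small sketch, safe to run even when the module isn't loaded:

```shell
# "Y" means nvidia-drm KMS is enabled; "N" means the parameter was not
# picked up (e.g. initramfs not regenerated after the change).
p=/sys/module/nvidia_drm/parameters/modeset
if [ -r "$p" ]; then
    cat "$p"
else
    echo "nvidia-drm module not loaded"
fi
```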

My configuration can be found here. I install the driver, generate the kernel module, create autostart scripts for xinit and GDM to execute the xrandr setprovideroutputsource command, and create an xorg.conf that contains AllowEmptyInitialConfiguration and has my NV GPU BusID.

@Espionage724

Gnome wayland isn’t going to work till mutter has EGLStreams support.

https://bugzilla.gnome.org/show_bug.cgi?id=773629