The cursor will cross from the monitor on the left (HDMI<->dGPU) to the monitor on the right (HDMI<->iGPU) at pretty much the right position. But there’s a catch: if I continue moving the cursor to the right and then come back, it’s all out of sync and the split position ends up somewhere in the middle of the left monitor.
nvidia-settings is now a separate package in Gentoo (although behind nvidia-drivers), so I’m running 375.10 now. Still mirrored… but something nice happened: kmscon started working with the NVIDIA blobs! It didn’t before, possibly due to the NVIDIA driver not supporting dumb buffers:
Oct 28 13:18:52 zorro kmscon[411]: [0000.222379] ERROR: video_drm2d: driver does not support dumb buffers (video_init() in src/uterm_drm2d_video.c:335)
I could just not use the iGPU altogether now, but it would be nice to see TwinView working with PRIME Sync.
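For the curious, the dumb-buffer capability kmscon complains about can be queried directly through libdrm. Below is a minimal sketch (my addition, not from the thread), assuming /dev/dri/card0 is the NVIDIA card’s DRM node; it performs the same drmGetCap check that triggers the error above:

/* check_dumb.c — build with: gcc check_dumb.c $(pkg-config --cflags --libs libdrm) */
#include <stdint.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    /* assumption: card0 is the GPU under test; adjust the path if needed */
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    uint64_t has_dumb = 0;
    /* the same capability kmscon tests in uterm_drm2d_video.c */
    if (drmGetCap(fd, DRM_CAP_DUMB_BUFFER, &has_dumb) < 0 || !has_dumb)
        printf("driver does not support dumb buffers\n");
    else
        printf("dumb buffers supported\n");

    close(fd);
    return 0;
}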
Gnarl, very funny.
375.10 is crashing badly:
[ 15.278] (WW) NVIDIA: This driver was compiled against the X.Org server SDK from git commit 25e4f9ee68b99c2810efdb6cd8c56affa45e1fea and may not be compatible with the final version of this SDK.
Obviously, my X server is built against a different commit.
Edit: using the X server at commit 25e4f9e, 375.10 works. agoins, I think you know all the names ;)
It looks like something I’d expect to see if the ABI was mismatched. Which commit did you use to build Xorg? Try 25e4f9e. 375.10 beta was released before the ABI was frozen, so it’s a bit behind ToT. It should stop being a problem soon, now that the ABI has been frozen.
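In case anyone wants to reproduce this, here is a rough sketch of building the server at that commit (the clone URL and install prefix are my assumptions; adjust for your distro):

git clone git://anongit.freedesktop.org/xorg/xserver
cd xserver
git checkout 25e4f9e
./autogen.sh --prefix=/opt/xserver-git    # example prefix, keep it out of /usr
make && make install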
It doesn’t segfault anymore. Anyway, I’m not building from source; I’m using Arch packages (they just added the 375.10 driver to the repo).
At least now I get this in dmesg:
[drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[drm] No driver support for vblank timestamp query.
nvidia-modeset: Allocated GPU:0 (GPU-11f5c854-549f-e316-402b-12d0ed756f2a) @ PCI:0000:01:00.0
nvidia-modeset: Freed GPU:0 (GPU-11f5c854-549f-e316-402b-12d0ed756f2a) @ PCI:0000:01:00.0
But I’m still not getting vsync; glxspheres64 runs at ~4k frames/sec.
I don’t know if that “Freed” message is a good sign.
Anyway, I’m at your disposal to build stuff and test things in order to make it work :D
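As a quick sanity check (my assumption, not something suggested in the thread), you can force the NVIDIA driver’s vsync on for a single run via its __GL_SYNC_TO_VBLANK environment variable and see whether the frame rate locks to the monitor’s refresh rate:

__GL_SYNC_TO_VBLANK=1 glxspheres64

If it still reports thousands of frames per second, nothing in the stack is synchronizing to vblank.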
Please run ‘nvidia-bug-report.sh’ and attach the resulting ‘nvidia-bug-report.log.gz’, so I can get more information about your system.
Using an Arch package for the X server likely isn’t going to serve you right now. Getting PRIME Synchronization not only requires a driver that supports it, but an X server that supports it as well. The ABI for Xorg 1.19 (ABI 23) was only recently frozen, so 375.10 beta does not have support for it. It requires a Git build of Xorg from commit 25e4f9e, because the ABI has changed between then and now. If the ABI does not match between the server and the driver, you can get strange crashing like the above because the offsets of structure members will be wrong and the driver and server will call into each other incorrectly.
ABI 23 was recently frozen (no more changes), so the next driver release should support Xorg 1.19 RC2 and later. It will be much easier to use packages then, as you won’t need a server built from an arbitrary commit. If you aren’t in a hurry, you might save some headaches by waiting a short while.
If you are using a packaged version of Xorg, it’s likely ABI 20, Xorg 1.18. 375.10 will run against this just fine because we build in support for all the frozen ABIs, but it will not support PRIME Sync. For that, you need ABI 23.
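You can check which ABI your installed server advertises by grepping its log; the server prints a line of the form “ABI class: X.Org Video Driver, version 20.0” at startup (20.0 meaning ABI 20 / Xorg 1.18, 23.0 meaning ABI 23 / Xorg 1.19):

grep -i "ABI class" /var/log/Xorg.0.log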
It’s strange that adding “IgnoreABI” fixed your segfaulting issue. If you were running against ABI 23 and lacked “IgnoreABI”, I would expect the server to just bail out due to the ABI not being officially supported. If anything, adding “IgnoreABI” should result in segfaults if you have a mismatched ABI. Hopefully the bug report will clear things up.
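For reference, “IgnoreABI” is normally set either on the server command line with the -ignoreABI flag or in xorg.conf:

Section "ServerFlags"
    Option "IgnoreABI" "1"
EndSection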
Hi, I’m using Fedora 25 with Xorg 1.19 and the NVIDIA 375.20 driver from negativo17’s repo. My laptop is an Optimus machine with a Skylake HD 520 iGPU and a GTX 960M dGPU.
I cannot seem to get either Wayland (the login screen uses X, and I can’t use the GNOME Wayland session) or PRIME Synchronization (screen tearing still happens even with ForceFullCompositionPipeline) working. I believe I need to enable KMS, but if I add nvidia-drm.modeset=1 to my kernel options, I no longer boot to a GUI. Can KMS work with Optimus?
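For what it’s worth, the modeset parameter can also be set through modprobe rather than the kernel command line; a sketch, assuming the usual nvidia-drm module name:

# /etc/modprobe.d/nvidia-drm.conf
options nvidia-drm modeset=1

Then regenerate the initramfs so it takes effect at boot, e.g. “dracut -f” on Fedora.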
My configuration can be found here. I install the driver, generate the kernel module, create autostart scripts for xinit and GDM that execute the xrandr setprovideroutputsource command, and create an xorg.conf that contains AllowEmptyInitialConfiguration and my NVIDIA GPU’s BusID, as in the sketch below.
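A minimal sketch of that setup (the BusID is only an example; use the one lspci reports for your dGPU). The xorg.conf side:

Section "Module"
    Load "modesetting"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1:0:0"
    Option "AllowEmptyInitialConfiguration"
EndSection

And the xrandr commands the autostart script runs:

xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto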