5.4 kernel breaks PRIME synchronization.

On every version of the 5.4 kernel I've tested, PRIME sync cannot be enabled.

xrandr output shows 0; it's also visibly evident that it's not working, with bad tearing:

PRIME Synchronization: 0
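For anyone wanting to reproduce the check: the line above comes from xrandr's property listing. A minimal sketch of the query (no output name needed, it filters all outputs):

```shell
# List output properties and filter for the PRIME Synchronization flag;
# "1" means sync is enabled, "0" means it is off (tearing expected).
xrandr --prop | grep -i 'PRIME Synchronization'
```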

5.4 kernels I've tested:

Arch: linux (5.4) and linux-zen (5.4)
Manjaro: 5.4 since the first RC release, same result.

https://bbs.archlinux.org/viewtopic.php?id=251032
https://forum.manjaro.org/t/issues-with-nvidia-kernel-module-after-kernel-driver-updates-switch-drivers/112471/9?u=dglt

Thanks for reporting this. For reference, we’re tracking it in internal bug number 2780092.

Just tested Linux 5.5-rc1 and there's no PRIME sync there either… so it seems the kernel devs think PRIME offload mode is a better implementation and decided to stick with it for kernel 5.5 as well. At least that's what I've heard.

I've heard the same, but so far it's only rumors as far as I can tell. Losing PRIME sync in favor of the feature-castrated render offload would be a shame.

Thanks for the reply, good to know it's being worked on.

Same issue here. nvidia package version: 440.36-1

Output of uname -a: Linux ArchLinux 5.4.2-arch1-1 #1 SMP PREEMPT Thu, 05 Dec 2019 12:29:40 +0000 x86_64 GNU/Linux

Thanks for working on this.

Hey, any news regarding this issue?

Note: This is the initial draft of the patch; for a more “compatible” version, see: https://gitlab.com/snippets/1929174

Now, I'm probably missing something with this one, but isn't this primarily an issue of the kernel renaming “reservation_object*” to “dma_resv*”, plus the removal of the (mainly for internal use) gem_prime_res_obj callback?
If that's the case, then a relatively quick and dirty way of fixing it would be to patch the driver like so:

diff --git a/kernel/conftest.sh b/kernel/conftest.sh
index c9c2db3..a10463d 100755
--- a/kernel/conftest.sh
+++ b/kernel/conftest.sh
@@ -130,6 +130,7 @@ test_headers() {
     FILES="$FILES linux/sched/signal.h"
     FILES="$FILES linux/sched/task.h"
     FILES="$FILES linux/sched/task_stack.h"
+    FILES="$FILES linux/reservation.h"
     FILES="$FILES xen/ioemu.h"
     FILES="$FILES linux/fence.h"
     FILES="$FILES soc/tegra/chip-id.h"
@@ -2063,7 +2064,7 @@ compile_test() {
             CODE="
             #include <drm/drmP.h>
             int conftest_drm_driver_has_gem_prime_res_obj(void) {
-                return offsetof(struct drm_driver, gem_prime_res_obj);
+                //return offsetof(struct drm_driver, gem_prime_res_obj);
             }"
 
             compile_check_conftest "$CODE" "NV_DRM_DRIVER_HAS_GEM_PRIME_RES_OBJ" "" "types"
diff --git a/kernel/nvidia-drm/nvidia-dma-fence-helper.h b/kernel/nvidia-drm/nvidia-dma-fence-helper.h
index 0aa5a4f..f289636 100644
--- a/kernel/nvidia-drm/nvidia-dma-fence-helper.h
+++ b/kernel/nvidia-drm/nvidia-dma-fence-helper.h
@@ -40,7 +40,9 @@
 #include <linux/dma-fence.h>
 #endif
 
+#if defined(NV_LINUX_RESERVATION_H_PRESENT)
 #include <linux/reservation.h>
+#endif
 
 #if defined(NV_LINUX_FENCE_H_PRESENT)
 typedef struct fence nv_dma_fence_t;
diff --git a/kernel/nvidia-drm/nvidia-drm-drv.c b/kernel/nvidia-drm/nvidia-drm-drv.c
index a66d3cc..b79330a 100644
--- a/kernel/nvidia-drm/nvidia-drm-drv.c
+++ b/kernel/nvidia-drm/nvidia-drm-drv.c
@@ -681,7 +681,7 @@ static struct drm_driver nv_drm_driver = {
     .gem_prime_vunmap       = nv_drm_gem_prime_vunmap,
 
 #if defined(NV_DRM_DRIVER_HAS_GEM_PRIME_RES_OBJ)
-    .gem_prime_res_obj      = nv_drm_gem_prime_res_obj,
+    //.gem_prime_res_obj      = nv_drm_gem_prime_res_obj,
 #endif
 
 #if defined(NV_DRM_DRIVER_HAS_SET_BUSID)
diff --git a/kernel/nvidia-drm/nvidia-drm-gem.c b/kernel/nvidia-drm/nvidia-drm-gem.c
index 7201ade..d58f4f0 100644
--- a/kernel/nvidia-drm/nvidia-drm-gem.c
+++ b/kernel/nvidia-drm/nvidia-drm-gem.c
@@ -46,7 +46,7 @@ void nv_drm_gem_free(struct drm_gem_object *gem)
     drm_gem_object_release(&nv_gem->base);
 
 #if defined(NV_DRM_DRIVER_HAS_GEM_PRIME_RES_OBJ)
-    reservation_object_fini(&nv_gem->resv);
+    dma_resv_fini(&nv_gem->resv);
 #endif
 
     nv_gem->ops->free(nv_gem);
@@ -113,12 +113,14 @@ void nv_drm_gem_prime_vunmap(struct drm_gem_object *gem, void *address)
 }
 
 #if defined(NV_DRM_DRIVER_HAS_GEM_PRIME_RES_OBJ)
+/*
 struct reservation_object* nv_drm_gem_prime_res_obj(struct drm_gem_object *obj)
 {
     struct nv_drm_gem_object *nv_gem = to_nv_gem_object(obj);
 
     return &nv_gem->resv;
 }
+*/
 #endif
 
 #endif /* NV_DRM_AVAILABLE */
diff --git a/kernel/nvidia-drm/nvidia-drm-gem.h b/kernel/nvidia-drm/nvidia-drm-gem.h
index b621969..e671795 100644
--- a/kernel/nvidia-drm/nvidia-drm-gem.h
+++ b/kernel/nvidia-drm/nvidia-drm-gem.h
@@ -56,7 +56,7 @@ struct nv_drm_gem_object {
     const struct nv_drm_gem_object_funcs *ops;
 
 #if defined(NV_DRM_DRIVER_HAS_GEM_PRIME_RES_OBJ)
-    struct reservation_object resv;
+    struct dma_resv resv;
 #endif
 };
 
@@ -127,7 +127,7 @@ void nv_drm_gem_object_init(struct nv_drm_device *nv_dev,
     drm_gem_private_object_init(dev, &nv_gem->base, size);
 
 #if defined(NV_DRM_DRIVER_HAS_GEM_PRIME_RES_OBJ)
-    reservation_object_init(&nv_gem->resv);
+    dma_resv_init(&nv_gem->resv);
 #endif
 }
 
diff --git a/kernel/nvidia-drm/nvidia-drm-prime-fence.c b/kernel/nvidia-drm/nvidia-drm-prime-fence.c
index 1f10940..5114965 100644
--- a/kernel/nvidia-drm/nvidia-drm-prime-fence.c
+++ b/kernel/nvidia-drm/nvidia-drm-prime-fence.c
@@ -518,7 +518,7 @@ int nv_drm_gem_fence_attach_ioctl(struct drm_device *dev,
         goto fence_context_create_fence_failed;
     }
 
-    reservation_object_add_excl_fence(&nv_gem->resv, fence);
+    dma_resv_add_excl_fence(&nv_gem->resv, fence);
 
     ret = 0;

Note that this is for version 440.44, and that I don't have a way of testing this myself, so it is completely untested. Use at your own risk.
Alternative link to downloadable patch: https://gitlab.com/snippets/1927096
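For anyone unsure how to apply it, here is a rough sketch of the usual .run-package workflow, assuming the driver installer is named NVIDIA-Linux-x86_64-440.44.run and the diff above was saved as prime-sync.patch (both file names are examples; adjust to what you actually downloaded):

```shell
# Unpack the installer without running it, apply the patch, then install.
sh ./NVIDIA-Linux-x86_64-440.44.run --extract-only
cd NVIDIA-Linux-x86_64-440.44
patch -p1 < ../prime-sync.patch
sudo ./nvidia-installer
```

On Arch it may be cleaner to patch via the PKGBUILD of nvidia or nvidia-dkms instead, so pacman keeps tracking the files.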

Woah, first of all, thanks for the quick response, I really appreciate it. :)

After downloading the driver, applying your patch, installing the linux-headers package (I'm using Arch), and installing the patched driver, I'm very happy to say that it worked!
Thank you so much for your help!

Here you can see “Synchronization: ON”, using the 5.4.8 kernel and NVIDIA Driver 440.44: https://imgur.com/a/hGHUAKq

I’m glad it seems to have worked (so far, at least). It’s always nice when a fix can help, and especially along with some real feedback. Thank you.

Also, for anyone else giving this a try: I’ve updated my initial post with a link to a snippet on my GitLab. That might make it easier to download and apply.


Hi there! I just tested your patch as well, and it does work for kernel 5.4.8! But it doesn't in kernel 5.5-rc4; in nvidia-settings, Synchronization is Off. Thank you for making this anyway, NVIDIA takes too long haha!

I haven't, from my admittedly very quick and rough search, found any major changes in the PRIME department between 5.4 and 5.5, so I'm not sure what to make of that, I'm afraid. Also, seeing as I don't run 5.5 myself, nor do I have a machine to try PRIME on, I probably won't be able to help with this. Sorry about that.
Maybe someone else will pick up the torch from here, or, even better, NVIDIA might just release an updated and compatible driver soon.

I just got it working in a few steps. I use Arch Linux with nvidia-xrun to run bspwm. PRIME sync is working great after the patch. Here is what I did:

  1. Download and patch the driver
  2. Uninstall nvidia and nvidia-xstart
  3. Reboot
  4. Install patched nvidia driver
  5. Reboot
  6. Install nvidia-xstart and jump in!
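On Arch, the steps above might look roughly like this; the package names are examples (nvidia vs nvidia-dkms, and nvidia-xrun from the AUR) and may differ on your setup:

```shell
sudo pacman -R nvidia nvidia-xrun        # step 2: remove stock driver and launcher
sudo reboot                              # step 3
# step 4: install the patched driver from the extracted, patched tree
cd NVIDIA-Linux-x86_64-440.44 && sudo ./nvidia-installer
sudo reboot                              # step 5
# step 6: reinstall nvidia-xrun (from the AUR) and launch your session
```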

As someone who despises screen tearing, this fix was a life saver!

Thanks for the patch :) Vsync works again on Arch with kernel 5.4.8, and nvidia-dkms-440.44-9

I can confirm the patch works on Manjaro's 5.4 as well as Arch 5.4. Thanks.

Has anyone tried the Vulkan beta driver, which supposedly properly supports kernel 5.4+ with PRIME?

I'm on the 5.4.12 kernel and the 440.48.02 Vulkan beta driver.

PRIME sync works as intended.

That’s great to know. Hopefully they’ll soon release an update for the 440.44 series.

You're able to run Vulkan games with PRIME sync/vsync enabled without getting the lockup loop issue?

Also, is the 440.48.* driver available somewhere? I don't see it listed on nvidia.com.

I'd like to know if the lockup loop issue is fixed as well, and whether it works properly with suspend/resume.

The 440.48.* driver is a Vulkan beta driver (basically the same, but with more Vulkan extensions enabled, I guess), which you can download from:
https://developer.nvidia.com/vulkan-driver

440.48 supports my Maxwell 960M, but even though lsmod shows the driver is loaded and nvidia-smi shows the correct version, I cannot get Xorg to start; it just locks up, requiring a hard reset.

I noticed that in /usr/src/nvidia-440.48.02/nvidia-uvm/hwref there is no maxwell directory, only turing, volta, pascal, and kepler, so maybe it was left out by mistake?

All of 430.xx, 435.xx, and 440.xx (except 440.48) build and work fine.
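For anyone debugging the same symptoms, a few quick checks might help narrow it down (the versioned path is an example and depends on your installed driver):

```shell
# Is the kernel module actually loaded?
lsmod | grep -E '^nvidia'
# What version does the driver report?
nvidia-smi --query-gpu=driver_version,name --format=csv
# Which GPU families ship hwref headers in this source tree?
ls /usr/src/nvidia-440.48.02/nvidia-uvm/hwref
```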