GNU/Linux Debian 11 nvidia-drm driver errors (version 460.73.01)

Hello, I’m currently using an Nvidia GeForce GTX 1660 Ti with the proprietary drivers on my laptop (plus an Intel GPU; PRIME render offload is enabled and working), and every single boot I get the following errors:

[   20.274846] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000000a
[   20.274851] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000001f
[   20.274853] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000016
[   20.274855] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000012
[   20.274856] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000002a
[   20.274858] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004
[   20.274859] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000020
[   20.274861] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000022
[   20.274862] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000021
[   20.275354] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004

I’m currently running a custom 5.10.28 kernel, but I also had these errors with the previous (stock Debian) kernel.

Please try setting the nvidia-drm module option modeset=1 and rebuilding the initramfs, to see if that fixes the issue.
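For reference, a minimal sketch of those steps on Debian (the config file name `nvidia-drm-modeset.conf` is just a convention; any `.conf` file under /etc/modprobe.d/ works):

```shell
# Tell modprobe to load nvidia-drm with KMS enabled
echo 'options nvidia-drm modeset=1' | sudo tee /etc/modprobe.d/nvidia-drm-modeset.conf

# Rebuild the initramfs so the option is picked up early at boot
sudo update-initramfs -u

# After a reboot, this should print "Y" if the option took effect
cat /sys/module/nvidia_drm/parameters/modeset
```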

Hello, I added a file in /etc/modprobe.d to apply modeset=1 at boot time and ran update-initramfs -u, but after a reboot the issue is still there:

$ sudo cat /sys/module/nvidia_drm/parameters/modeset 
Y
$ sudo dmesg |grep nvidia
[ ... ]
[   14.855730] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000000a
[   14.855734] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000001f
[   14.855736] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000016
[   14.855738] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000012
[   14.855740] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000002a
[   14.855741] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004
[   14.855743] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000020
[   14.855744] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000022
[   14.855746] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000021
[   14.856515] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004

Hello, the issue is still there… :-/

[mar. mai 11 00:20:05 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000000a
[mar. mai 11 00:20:05 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000001f
[mar. mai 11 00:20:05 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000016
[mar. mai 11 00:20:05 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000012
[mar. mai 11 00:20:05 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000002a
[mar. mai 11 00:20:05 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004
[mar. mai 11 00:20:05 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000020
[mar. mai 11 00:20:05 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000022
[mar. mai 11 00:20:05 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000021
[mar. mai 11 00:20:05 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004

Nothing new on your side?

Hello, still the same error messages, and apparently more and more frequent. I’m currently running kernel 5.10.0-8-amd64 (Debian Testing) and Nvidia driver 460.73.01. The errors now show up right next to this kernel warning:

[ven. juil. 30 02:50:20 2021] ------------[ cut here ]------------
[ven. juil. 30 02:50:20 2021] WARNING: CPU: 9 PID: 1124587 at /var/lib/dkms/nvidia-current/460.73.01/build/nvidia-drm/nvidia-drm-drv.c:530 nv_drm_master_set+0x22/0x30 [nvidia_drm]
[ven. juil. 30 02:50:20 2021] Modules linked in: nf_log_ipv4 nf_log_common nft_counter xt_LOG nft_compat nvidia_uvm(POE) rndis_host cdc_ether usbnet mii sr_mod sg uas usb_storage tcp_diag inet_diag uvcvideo videobuf2_vmalloc snd_usb_audio videobuf2_memops videobuf2_v4l2 videobuf2_common snd_usbmidi_lib videodev snd_rawmidi snd_seq_device mc vmnet(OE) parport_pc vmmon(OE) cpuid btrfs blake2b_generic ufs qnx4 hfsplus hfs cdrom minix msdos jfs xfs dm_mod overlay vmw_vsock_vmci_transport vsock vmw_vmci ctr ccm rfcomm dummy nf_conntrack_netlink nf_conntrack nf_defrag_ipv6 cpufreq_powersave cpufreq_ondemand cmac cpufreq_conservative cpufreq_userspace algif_hash algif_skcipher lz4 af_alg zram bnep zsmalloc nf_defrag_ipv4 nvidia_drm(POE) x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel snd_hda_codec_realtek snd_hda_codec_generic kvm nvidia_modeset(POE) irqbypass rapl intel_cstate mei_hdcp intel_rapl_msr snd_sof_pci snd_sof_intel_byt snd_sof_intel_ipc snd_sof_intel_hda_common snd_sof_xtensa_dsp snd_sof
[ven. juil. 30 02:50:20 2021]  snd_sof_intel_hda ledtrig_audio snd_soc_skl snd_hda_codec_hdmi snd_soc_hdac_hda i915 snd_hda_ext_core snd_soc_sst_ipc snd_soc_sst_dsp snd_soc_acpi_intel_match iwlmvm snd_soc_acpi snd_hda_intel snd_intel_dspcfg soundwire_intel btusb soundwire_generic_allocation mac80211 btrtl snd_soc_core btbcm btintel nls_ascii nvidia(POE) bluetooth nls_cp437 intel_uncore snd_compress vfat soundwire_cadence fat snd_hda_codec snd_hda_core libarc4 asus_nb_wmi asus_wmi pcspkr serio_raw snd_hwdep sparse_keymap soundwire_bus efi_pstore wmi_bmof iwlwifi snd_pcm mxm_wmi jitterentropy_rng snd_timer snd iTCO_wdt intel_pmc_bxt iTCO_vendor_support cfg80211 watchdog ee1004 mei_me drbg joydev drm_kms_helper evdev soundcore ansi_cprng hid_multitouch mei ecdh_generic ecc cec rfkill processor_thermal_device intel_rapl_common i2c_algo_bit intel_soc_dts_iosf intel_pch_thermal int3403_thermal int340x_thermal_zone ac tpm_crb tpm_tis tpm_tis_core tpm rng_core int3400_thermal intel_pmc_core acpi_pad
[ven. juil. 30 02:50:20 2021]  acpi_thermal_rel acpi_tad asus_wireless button vhost_net tun vhost vhost_iotlb nf_tables tap msr drm nfnetlink sunrpc ppdev lp parport fuse configfs binfmt_misc efivarfs ip_tables x_tables autofs4 ext4 crc16 mbcache jbd2 hid_logitech_hidpp raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor hid_logitech_dj usbhid raid6_pq libcrc32c crc32c_generic raid1 raid0 multipath linear md_mod hid_generic crc32_pclmul crc32c_intel ghash_clmulni_intel xhci_pci xhci_hcd nvme ahci libahci libata usbcore r8169 aesni_intel nvme_core realtek scsi_mod mdio_devres libaes crypto_simd libphy t10_pi crc_t10dif cryptd crct10dif_generic glue_helper i2c_hid intel_lpss_pci i2c_i801 intel_lpss hid crct10dif_pclmul i2c_smbus crct10dif_common idma64 usb_common battery wmi video [last unloaded: vmnet]
[ven. juil. 30 02:50:20 2021] CPU: 9 PID: 1124587 Comm: Xorg Tainted: P     U  W  OE     5.10.0-8-amd64 #1 Debian 5.10.46-2
[ven. juil. 30 02:50:20 2021] Hardware name: ASUSTeK COMPUTER INC. ROG Strix G731GU_G731GU/G731GU, BIOS G731GU.312 02/19/2021
[ven. juil. 30 02:50:20 2021] RIP: 0010:nv_drm_master_set+0x22/0x30 [nvidia_drm]
[ven. juil. 30 02:50:20 2021] Code: 14 76 bc ef 0f 1f 40 00 0f 1f 44 00 00 48 8b 47 48 48 8b 78 20 48 8b 05 0c 5d 00 00 48 8b 40 28 e8 63 79 f1 ef 84 c0 74 01 c3 <0f> 0b c3 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 80 3d 4c
[ven. juil. 30 02:50:20 2021] RSP: 0018:ffffa6a4c1a97bd0 EFLAGS: 00010246
[ven. juil. 30 02:50:20 2021] RAX: 0000000000000000 RBX: ffff949721db9e00 RCX: 0000000000000008
[ven. juil. 30 02:50:20 2021] RDX: ffffffffc330be98 RSI: 0000000000000292 RDI: ffffffffc330be60
[ven. juil. 30 02:50:20 2021] RBP: ffff9497ba5fc240 R08: 0000000000000008 R09: ffffa6a4c1a97bb8
[ven. juil. 30 02:50:20 2021] R10: ffff949721db9e00 R11: 0000000000000000 R12: ffff9496637bd800
[ven. juil. 30 02:50:20 2021] R13: 0000000000000000 R14: ffff9496637bd800 R15: 0000000062cb76a8
[ven. juil. 30 02:50:20 2021] FS:  00007fb72d391a40(0000) GS:ffff9499adc40000(0000) knlGS:0000000000000000
[ven. juil. 30 02:50:20 2021] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ven. juil. 30 02:50:20 2021] CR2: 0000556843a403b8 CR3: 0000000271df4004 CR4: 00000000003706e0
[ven. juil. 30 02:50:20 2021] Call Trace:
[ven. juil. 30 02:50:20 2021]  drm_new_set_master+0x7a/0x100 [drm]
[ven. juil. 30 02:50:20 2021]  drm_master_open+0x68/0x90 [drm]
[ven. juil. 30 02:50:20 2021]  drm_open+0xf8/0x250 [drm]
[ven. juil. 30 02:50:20 2021]  drm_stub_open+0xab/0x130 [drm]
[ven. juil. 30 02:50:20 2021]  chrdev_open+0xed/0x230
[ven. juil. 30 02:50:20 2021]  ? cdev_device_add+0x90/0x90
[ven. juil. 30 02:50:20 2021]  do_dentry_open+0x14b/0x360
[ven. juil. 30 02:50:20 2021]  path_openat+0xb82/0x1080
[ven. juil. 30 02:50:20 2021]  ? inotify_handle_inode_event+0x1c0/0x1f0
[ven. juil. 30 02:50:20 2021]  do_filp_open+0x88/0x130
[ven. juil. 30 02:50:20 2021]  ? getname_flags.part.0+0x29/0x1a0
[ven. juil. 30 02:50:20 2021]  ? __check_object_size+0x136/0x150
[ven. juil. 30 02:50:20 2021]  do_sys_openat2+0x97/0x150
[ven. juil. 30 02:50:20 2021]  __x64_sys_openat+0x54/0x90
[ven. juil. 30 02:50:20 2021]  do_syscall_64+0x33/0x80
[ven. juil. 30 02:50:20 2021]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ven. juil. 30 02:50:20 2021] RIP: 0033:0x7fb72d8e2767
[ven. juil. 30 02:50:20 2021] Code: 25 00 00 41 00 3d 00 00 41 00 74 47 64 8b 04 25 18 00 00 00 85 c0 75 6b 44 89 e2 48 89 ee bf 9c ff ff ff b8 01 01 00 00 0f 05 <48> 3d 00 f0 ff ff 0f 87 95 00 00 00 48 8b 4c 24 28 64 48 2b 0c 25
[ven. juil. 30 02:50:20 2021] RSP: 002b:00007ffe1a2b5ef0 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
[ven. juil. 30 02:50:20 2021] RAX: ffffffffffffffda RBX: 00000000ffffffff RCX: 00007fb72d8e2767
[ven. juil. 30 02:50:20 2021] RDX: 0000000000080002 RSI: 0000556843a3e980 RDI: 00000000ffffff9c
[ven. juil. 30 02:50:20 2021] RBP: 0000556843a3e980 R08: 0000000000000031 R09: 0000000000000000
[ven. juil. 30 02:50:20 2021] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000080002
[ven. juil. 30 02:50:20 2021] R13: 0000556843a3e980 R14: 0000556843a3e980 R15: 0000556843a2e750
[ven. juil. 30 02:50:20 2021] ---[ end trace 6d373fcdc7291b75 ]---
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000000a
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000001f
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000016
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000012
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000002a
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000020
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000022
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000021
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000000a
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000001f
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000016
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000012
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000002a
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000020
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000022
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000021
[ven. juil. 30 02:50:23 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004

New Nvidia driver (460.91.03), a reboot, and… the same old messages:

[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000000a
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000001f
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000016
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000012
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000002a
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000020
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000022
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000021
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000000a
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000001f
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000016
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000012
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000002a
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000020
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000022
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000021
[mar. août  3 02:18:33 2021] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004

And, of course, the kernel backtrace from the nvidia-drm module:

[mar. août  3 02:18:31 2021] ------------[ cut here ]------------
[mar. août  3 02:18:31 2021] WARNING: CPU: 9 PID: 29017 at /var/lib/dkms/nvidia-current/460.91.03/build/nvidia-drm/nvidia-drm-drv.c:530 nv_drm_master_set+0x22/0x30 [nvidia_drm]
[mar. août  3 02:18:31 2021] Modules linked in: overlay vmnet(OE) vmw_vsock_vmci_transport vsock vmw_vmci vmmon(OE) rfcomm ctr ccm dummy nf_conntrack_netlink nf_conntrack nf_defrag_ipv6 cmac algif_hash cpufreq_powersave algif_skcipher cpufreq_ondemand af_alg nf_log_ipv4 cpufreq_conservative nf_log_common nft_counter bnep cpufreq_userspace lz4 zram xt_LOG zsmalloc nf_defrag_ipv4 nft_compat snd_hda_codec_hdmi nvidia_drm(POE) x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel snd_hda_codec_realtek snd_hda_codec_generic mei_hdcp intel_rapl_msr kvm nvidia_modeset(POE) irqbypass rapl intel_cstate snd_sof_pci snd_sof_intel_byt snd_sof_intel_ipc snd_sof_intel_hda_common snd_sof_xtensa_dsp snd_sof snd_sof_intel_hda ledtrig_audio snd_soc_skl iwlmvm nvidia(POE) snd_soc_hdac_hda nls_ascii snd_hda_ext_core mac80211 snd_soc_sst_ipc nls_cp437 i915 snd_soc_sst_dsp vfat snd_soc_acpi_intel_match snd_soc_acpi fat btusb btrtl btbcm btintel bluetooth snd_hda_intel snd_intel_dspcfg libarc4 soundwire_intel intel_uncore
[mar. août  3 02:18:31 2021]  soundwire_generic_allocation snd_soc_core uvcvideo snd_usb_audio videobuf2_vmalloc snd_compress videobuf2_memops videobuf2_v4l2 soundwire_cadence jitterentropy_rng iwlwifi videobuf2_common snd_hda_codec snd_usbmidi_lib videodev asus_nb_wmi snd_rawmidi asus_wmi serio_raw efi_pstore snd_hda_core drbg pcspkr snd_seq_device sparse_keymap wmi_bmof snd_hwdep mc ansi_cprng mei_me cfg80211 mxm_wmi soundwire_bus joydev evdev snd_pcm drm_kms_helper iTCO_wdt intel_pmc_bxt iTCO_vendor_support watchdog snd_timer ee1004 ecdh_generic mei ecc hid_multitouch snd cec processor_thermal_device i2c_algo_bit soundcore rfkill intel_rapl_common intel_pch_thermal intel_soc_dts_iosf int3403_thermal ac tpm_crb int340x_thermal_zone tpm_tis tpm_tis_core tpm rng_core int3400_thermal intel_pmc_core asus_wireless acpi_thermal_rel button acpi_pad acpi_tad vhost_net tun vhost vhost_iotlb tap nf_tables msr nfnetlink parport_pc drm ppdev lp parport fuse configfs binfmt_misc efivarfs ip_tables x_tables autofs4
[mar. août  3 02:18:31 2021]  ext4 crc16 mbcache jbd2 hid_logitech_hidpp raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor hid_logitech_dj usbhid raid6_pq libcrc32c crc32c_generic raid1 raid0 multipath linear md_mod hid_generic crc32_pclmul crc32c_intel ghash_clmulni_intel nvme r8169 nvme_core xhci_pci ahci t10_pi xhci_hcd libahci libata aesni_intel usbcore libaes crc_t10dif crypto_simd crct10dif_generic scsi_mod realtek mdio_devres libphy cryptd i2c_hid glue_helper i2c_i801 intel_lpss_pci hid intel_lpss crct10dif_pclmul i2c_smbus crct10dif_common idma64 usb_common battery wmi video
[mar. août  3 02:18:31 2021] CPU: 9 PID: 29017 Comm: Xorg Tainted: P     U     OE     5.10.0-8-amd64 #1 Debian 5.10.46-3
[mar. août  3 02:18:31 2021] Hardware name: ASUSTeK COMPUTER INC. ROG Strix G731GU_G731GU/G731GU, BIOS G731GU.312 02/19/2021
[mar. août  3 02:18:31 2021] RIP: 0010:nv_drm_master_set+0x22/0x30 [nvidia_drm]
[mar. août  3 02:18:31 2021] Code: 14 56 7b d6 0f 1f 40 00 0f 1f 44 00 00 48 8b 47 48 48 8b 78 20 48 8b 05 0c 5d 00 00 48 8b 40 28 e8 63 59 b0 d6 84 c0 74 01 c3 <0f> 0b c3 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 80 3d 4c
[mar. août  3 02:18:31 2021] RSP: 0018:ffffb237441b3bd0 EFLAGS: 00010246
[mar. août  3 02:18:31 2021] RAX: 0000000000000000 RBX: ffff9d580fe4f200 RCX: 0000000000000008
[mar. août  3 02:18:31 2021] RDX: ffffffffc3a86e98 RSI: 0000000000000292 RDI: ffffffffc3a86e60
[mar. août  3 02:18:31 2021] RBP: ffff9d5750249000 R08: 0000000000000008 R09: ffffb237441b3bb8
[mar. août  3 02:18:31 2021] R10: 0000000000000000 R11: 0000000000000000 R12: ffff9d5638360000
[mar. août  3 02:18:31 2021] R13: 0000000000000000 R14: ffff9d5638360000 R15: 00000000b08f3ea8
[mar. août  3 02:18:31 2021] FS:  00007f458adfaa40(0000) GS:ffff9d596dc40000(0000) knlGS:0000000000000000
[mar. août  3 02:18:31 2021] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[mar. août  3 02:18:31 2021] CR2: 00005567277b03b8 CR3: 00000002eef5a002 CR4: 00000000003706e0
[mar. août  3 02:18:31 2021] Call Trace:
[mar. août  3 02:18:31 2021]  drm_new_set_master+0x7a/0x100 [drm]
[mar. août  3 02:18:31 2021]  drm_master_open+0x68/0x90 [drm]
[mar. août  3 02:18:31 2021]  drm_open+0xf8/0x250 [drm]
[mar. août  3 02:18:31 2021]  drm_stub_open+0xab/0x130 [drm]
[mar. août  3 02:18:31 2021]  chrdev_open+0xed/0x230
[mar. août  3 02:18:31 2021]  ? cdev_device_add+0x90/0x90
[mar. août  3 02:18:31 2021]  do_dentry_open+0x14b/0x360
[mar. août  3 02:18:31 2021]  path_openat+0xb82/0x1080
[mar. août  3 02:18:31 2021]  ? inotify_handle_inode_event+0x1c0/0x1f0
[mar. août  3 02:18:31 2021]  do_filp_open+0x88/0x130
[mar. août  3 02:18:31 2021]  ? getname_flags.part.0+0x29/0x1a0
[mar. août  3 02:18:31 2021]  ? __check_object_size+0x136/0x150
[mar. août  3 02:18:31 2021]  do_sys_openat2+0x97/0x150
[mar. août  3 02:18:31 2021]  __x64_sys_openat+0x54/0x90
[mar. août  3 02:18:31 2021]  do_syscall_64+0x33/0x80
[mar. août  3 02:18:31 2021]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[mar. août  3 02:18:31 2021] RIP: 0033:0x7f458b34b767
[mar. août  3 02:18:31 2021] Code: 25 00 00 41 00 3d 00 00 41 00 74 47 64 8b 04 25 18 00 00 00 85 c0 75 6b 44 89 e2 48 89 ee bf 9c ff ff ff b8 01 01 00 00 0f 05 <48> 3d 00 f0 ff ff 0f 87 95 00 00 00 48 8b 4c 24 28 64 48 2b 0c 25
[mar. août  3 02:18:31 2021] RSP: 002b:00007ffc0967f880 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
[mar. août  3 02:18:31 2021] RAX: ffffffffffffffda RBX: 00000000ffffffff RCX: 00007f458b34b767
[mar. août  3 02:18:31 2021] RDX: 0000000000080002 RSI: 00005567277ae980 RDI: 00000000ffffff9c
[mar. août  3 02:18:31 2021] RBP: 00005567277ae980 R08: 0000000000000031 R09: 0000000000000000
[mar. août  3 02:18:31 2021] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000080002
[mar. août  3 02:18:31 2021] R13: 00005567277ae980 R14: 00005567277ae980 R15: 000055672779e750
[mar. août  3 02:18:31 2021] ---[ end trace 7bb274f70a7284b8 ]---

Hello, I installed a new driver version today (470.57.02-2, on Debian kernel 5.10.46-4). Same old error messages, but they now also concern nv_drm_gem_export_nvkms_memory_ioctl, as you can read below. I should mention that I switched back to a stock Debian kernel, without any improvement…

[   39.598843] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000000a
[   39.598847] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000001f
[   39.598849] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000016
[   39.598850] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000012
[   39.598852] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000002a
[   39.598853] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004
[   39.598855] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000020
[   39.598856] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000022
[   39.598857] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000021
[   39.599092] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004
[  183.914627] [drm:nv_drm_gem_export_nvkms_memory_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup NVKMS gem object for export: 0x00000001
[ 1700.875580] [drm:nv_drm_gem_export_nvkms_memory_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup NVKMS gem object for export: 0x00000001

In case it helps, here is some more information:

$ __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia __VK_LAYER_NV_optimus=NVIDIA_only glxinfo |head -20                      
name of display: :0.0
display: :0  screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
server glx extensions:
    GLX_ARB_context_flush_control, GLX_ARB_create_context, 
    GLX_ARB_create_context_no_error, GLX_ARB_create_context_profile, 
    GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float, 
    GLX_ARB_multisample, GLX_EXT_buffer_age, 
    GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile, 
    GLX_EXT_framebuffer_sRGB, GLX_EXT_import_context, GLX_EXT_libglvnd, 
    GLX_EXT_stereo_tree, GLX_EXT_swap_control, GLX_EXT_swap_control_tear, 
    GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, 
    GLX_NV_copy_image, GLX_NV_delay_before_swap, GLX_NV_float_buffer, 
    GLX_NV_multigpu_context, GLX_NV_robustness_video_memory_purge, 
    GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_swap_control, 
    GLX_SGI_video_sync
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
[ ... ]

And without PRIME render offload:

$ glxinfo |head -20                                                                                               
name of display: :0.0
display: :0  screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.4
server glx extensions:
    GLX_ARB_create_context, GLX_ARB_create_context_no_error, 
    GLX_ARB_create_context_profile, GLX_ARB_create_context_robustness, 
    GLX_ARB_fbconfig_float, GLX_ARB_framebuffer_sRGB, GLX_ARB_multisample, 
    GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile, 
    GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB, 
    GLX_EXT_import_context, GLX_EXT_libglvnd, GLX_EXT_no_config_context, 
    GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, 
    GLX_INTEL_swap_event, GLX_MESA_copy_sub_buffer, GLX_OML_swap_method, 
    GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, 
    GLX_SGIX_visual_select_group, GLX_SGI_make_current_read, 
    GLX_SGI_swap_control
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
client glx extensions:
[ ... ]
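As an aside, the three environment variables used in the glxinfo run above can be wrapped in a small helper so they don’t have to be typed each time. A sketch (the script name `prime-run` is just a local convention, nothing shipped by Debian):

```shell
#!/bin/sh
# prime-run: run a command on the Nvidia GPU via PRIME render offload.
# Usage: prime-run glxinfo
export __NV_PRIME_RENDER_OFFLOAD=1
export __GLX_VENDOR_LIBRARY_NAME=nvidia
export __VK_LAYER_NV_optimus=NVIDIA_only
exec "$@"
```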

By the way, I also noticed these suspicious lines in /var/log/Xorg.0.log. They remain to be investigated, but they might be related to a suspend event.

[  3652.535] (II) event2  - Asus Keyboard: device removed
[  3652.563] (II) AIGLX: Suspending AIGLX clients for VT switch
[  4309.778] (WW) NVIDIA(G0): Failed to set the display configuration
[  4309.778] (WW) NVIDIA(G0):  - Setting a mode on head 0 failed: Insufficient permissions
[  4309.779] (WW) NVIDIA(G0):  - Setting a mode on head 1 failed: Insufficient permissions
[  4309.779] (WW) NVIDIA(G0):  - Setting a mode on head 2 failed: Insufficient permissions
[  4309.779] (WW) NVIDIA(G0):  - Setting a mode on head 3 failed: Insufficient permissions
[  4309.779] (WW) NVIDIA(G0): Failed to set the display configuration
[  4309.779] (WW) NVIDIA(G0):  - Setting a mode on head 0 failed: Insufficient permissions
[  4309.779] (WW) NVIDIA(G0):  - Setting a mode on head 1 failed: Insufficient permissions
[  4309.779] (WW) NVIDIA(G0):  - Setting a mode on head 2 failed: Insufficient permissions
[  4309.779] (WW) NVIDIA(G0):  - Setting a mode on head 3 failed: Insufficient permissions
[  4309.779] (WW) NVIDIA(G0): Failed to set DPMS to standby
[  4369.780] (WW) NVIDIA(G0): Failed to set the display configuration
[  4369.780] (WW) NVIDIA(G0):  - Setting a mode on head 0 failed: Insufficient permissions
[  4369.780] (WW) NVIDIA(G0):  - Setting a mode on head 1 failed: Insufficient permissions
[  4369.780] (WW) NVIDIA(G0):  - Setting a mode on head 2 failed: Insufficient permissions
[  4369.780] (WW) NVIDIA(G0):  - Setting a mode on head 3 failed: Insufficient permissions
[  4369.780] (WW) NVIDIA(G0): Failed to set the display configuration
[  4369.780] (WW) NVIDIA(G0):  - Setting a mode on head 0 failed: Insufficient permissions
[  4369.780] (WW) NVIDIA(G0):  - Setting a mode on head 1 failed: Insufficient permissions
[  4369.780] (WW) NVIDIA(G0):  - Setting a mode on head 2 failed: Insufficient permissions
[  4369.780] (WW) NVIDIA(G0):  - Setting a mode on head 3 failed: Insufficient permissions
[  4369.780] (WW) NVIDIA(G0): Failed to set DPMS to off
[  6500.460] (II) AIGLX: Resuming AIGLX clients after VT switch
[  6500.463] (II) modeset(0): EDID vendor "AUO", prod id 18333
[  6500.463] (II) modeset(0): Using hsync ranges from config file 
[  6500.463] (II) modeset(0): Using vrefresh ranges from config file
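To check whether those warnings really line up with a suspend or hibernate cycle, one way is to grep the journal for both kinds of events and compare timestamps. A minimal sketch, assuming a systemd journal is available (the helper name `filter_suspend_lines` is mine, not a standard tool):

```shell
#!/bin/sh
# Hypothetical helper: keep only lines mentioning suspend/hibernate events
# or the NVIDIA "Insufficient permissions" warnings, so their timestamps
# can be compared side by side.
filter_suspend_lines() {
    grep -Ei 'suspend|hibernat|Insufficient permissions'
}

# Typical use: feed it the journal for the current boot.
# journalctl -b | filter_suspend_lines
```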

The very same messages still appear with Nvidia driver version 470.141.03:

[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000000a
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000001f
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000016
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000012
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000002a
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000020
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000022
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000021
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000000a
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000001f
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000016
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000012
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x0000002a
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000020
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000022
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000021
[Fri Aug 26 11:44:42 2022] [drm:nv_drm_gem_fence_attach_ioctl [nvidia_drm]] *ERROR* [nvidia-drm] [GPU ID 0x00000100] Failed to lookup gem object for fence attach: 0x00000004
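For what it's worth, a quick way to count these fence-attach errors after a boot, so one can tell whether a driver or configuration change makes any difference (the function name is mine; the grep pattern simply matches the lines quoted above):

```shell
#!/bin/sh
# Count the nvidia-drm fence-attach errors in a kernel log stream.
# Typical use: sudo dmesg | count_fence_errors
count_fence_errors() {
    grep -c 'Failed to lookup gem object for fence attach'
}
```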

Finally, I got those messages to disappear simply by using nvidia_drv_video.so, selected through the LIBVA_DRIVER_NAME environment variable («nvidia» instead of «iHD»). But it’s more a workaround than a real solution: it only provides hardware-accelerated decoding (encoding is not supported yet), and after hibernation this VA-API driver is no longer functional, as shown by:

$ strace vainfo
[ ... ]
stat("/dev/nvidia-uvm", {st_mode=S_IFCHR|0666, st_rdev=makedev(0xee, 0), ...}) = 0
stat("/dev/nvidia-uvm-tools", {st_mode=S_IFCHR|0666, st_rdev=makedev(0xee, 0x1), ...}) = 0
openat(AT_FDCWD, "/dev/nvidia-uvm", O_RDWR|O_CLOEXEC) = -1 EIO (Input/output error)
openat(AT_FDCWD, "/dev/nvidia-uvm", O_RDWR) = -1 EIO (Input/output error)
ioctl(-5, _IOC(_IOC_NONE, 0, 0x2, 0x3000), 0) = -1 EBADF (Bad file descriptor)
ioctl(5, _IOC(_IOC_READ|_IOC_WRITE, 0x46, 0x29, 0x10), 0x7ffc15f05b30) = 0
close(5)                                = 0
getpid()                                = 866807
exit_group(1)                           = ?
+++ exited with 1 +++
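For completeness, the workaround itself is just an environment variable; exporting it in the session environment (e.g. from ~/.profile) is enough for vainfo, mpv and friends to pick up the NVDEC-backed driver. This assumes nvidia_drv_video.so is already installed where libva looks for drivers:

```shell
# Select the NVDEC-backed VA-API driver instead of Intel's iHD one.
export LIBVA_DRIVER_NAME=nvidia

# Sanity check: vainfo should then list the driver's decode profiles.
# vainfo
```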

Regarding GPU load, nvtop does not show any meaningful difference between the two drivers. Note that this driver uses NVDEC as its backend but is not an official Nvidia driver; see GitHub - elFarto/nvidia-vaapi-driver: A VA-API implemention using NVIDIA's NVDEC