FreeBSD 12, driver v430: "EQ overflow" with 1660 Ti

Hello,

I am having problems with my 1660 Ti card (MSI Armor OC) on FreeBSD.

  • I’m running FreeBSD 12.0 with the GENERIC kernel (I also tried a custom kernel without "options VESA"). I have disabled the on-board graphics in my ASRock AB350 BIOS. My primary display is connected to the NVIDIA 1660 Ti over HDMI. I was using a KVM switch (Belkin Flip), but plugging both mouse and keyboard directly into the PC does not solve the problem.

  • The 430 driver was installed from a patched version of x11/nvidia-driver in the /usr/ports tree. The patch came from this bug report page: 232645 – x11/nvidia-driver: Update to 410.78 (New GPU support), Create x11/nvidia-driver-390

  • I’m loading the nvidia and nvidia-modeset kernel modules (specified in /boot/loader.conf) and running dbus and hald (enabled in rc.conf). I’m using the xorg.conf generated automatically by nvidia-xconfig (430). I’m also using VGA text mode (set in loader.conf), though it makes no difference with or without it.

  • I have disabled the 2400G APU graphics in the BIOS. I have also tried disabling IOMMU in the BIOS (it makes no difference either way).

  • I have tried reseating the card. My PSU is 750 W, around double the minimum required for this card.

  • When I run X, I usually have around 10 seconds before it crashes with a screen freeze.
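For completeness, the configuration described above boils down to roughly the following entries (a sketch from memory; exact lines on my machine may differ slightly):

```
# /boot/loader.conf
nvidia_load="YES"
nvidia-modeset_load="YES"
hw.vga.textmode="1"      # vga textmode; no difference with or without

# /etc/rc.conf
dbus_enable="YES"
hald_enable="YES"
```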

I attach /var/log/messages and Xorg.0.log below. The relevant line is “GPU has fallen off the bus.”

I’ve tried various X configurations, and I note that X runs fine with the VESA driver.
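To be clear, "using the VESA driver" just means swapping the Driver line in the xorg.conf Device section, roughly as below (abbreviated; the Identifier is whatever nvidia-xconfig generated):

```
Section "Device"
    Identifier "Device0"
    Driver     "vesa"    # stable; "nvidia" freezes within ~10 seconds
EndSection
```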

Some discussion on this was carried out in the FreeBSD forum: EQ overflow FreeBSD 12.0 + nVidia 1660Ti with 430 driver + Ryzen 2400G | The FreeBSD Forums

It appears to be specific to my setup: the same hardware runs stably, under load, in Windows 10.

Thanks,

Rob S.

From /var/log/messages:

Jul 3 00:19:00 robs-pc kernel: NVRM: GPU at PCI:0000:10:00: GPU-890b60a8-d9b3-824a-784b-648e84db328b
Jul 3 00:19:00 robs-pc kernel: NVRM: GPU Board Serial Number:
Jul 3 00:19:00 robs-pc kernel: NVRM: Xid (PCI:0000:10:00): 79, GPU has fallen off the bus.
Jul 3 00:19:00 robs-pc kernel: NVRM: GPU 0000:10:00.0: GPU has fallen off the bus.
Jul 3 00:19:00 robs-pc kernel: NVRM: GPU 0000:10:00.0: GPU is on Board .
Jul 3 00:19:00 robs-pc kernel: NVRM: A GPU crash dump has been created. If possible, please run
Jul 3 00:19:00 robs-pc kernel: NVRM: nvidia-bug-report.sh as root to collect this data before
Jul 3 00:19:00 robs-pc kernel: NVRM: the NVIDIA kernel module is unloaded.
Jul 3 00:19:04 robs-pc kernel: uhub_reattach_port: giving up port reset - device vanished
Jul 3 00:19:16 robs-pc syslogd: last message repeated 10 times
Jul 3 00:19:17 robs-pc kernel: nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000c57d:0:0:0x0000000f
Jul 3 00:19:17 robs-pc kernel: nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000c57e:1:0:0x0000000f
Jul 3 00:19:17 robs-pc kernel: nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000c57e:0:0:0x0000000f
Jul 3 00:19:17 robs-pc kernel: nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000c57e:3:0:0x0000000f
Jul 3 00:19:17 robs-pc kernel: nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000c57e:5:0:0x0000000f
Jul 3 00:19:17 robs-pc kernel: nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000c57e:7:0:0x0000000f
Jul 3 00:19:17 robs-pc kernel: nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000c57d:0:0:0x0000000f
Jul 3 00:19:17 robs-pc kernel: nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000c57e:1:0:0x0000000f
Jul 3 00:19:17 robs-pc kernel: nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000c57e:0:0:0x0000000f
Jul 3 00:19:17 robs-pc kernel: nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000c57e:3:0:0x0000000f
Jul 3 00:19:17 robs-pc kernel: nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000c57e:5:0:0x0000000f
Jul 3 00:19:17 robs-pc kernel: nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000c57e:7:0:0x0000000f
Jul 3 00:19:17 robs-pc kernel: nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000c57e:0:0:0x0000000f
Jul 3 00:19:17 robs-pc kernel: nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000c57e:2:0:0x0000000f
Jul 3 00:19:17 robs-pc kernel: nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000c57e:4:0:0x0000000f
Jul 3 00:19:17 robs-pc kernel: nvidia-modeset: ERROR: GPU:0: Failed to query display engine channel state: 0x0000c57e:6:0:0x0000000f
Jul 3 00:19:17 robs-pc kernel: uhub_reattach_port: giving up port reset - device vanished
Jul 3 00:19:48 robs-pc syslogd: last message repeated 25 times
Jul 3 00:19:49 robs-pc devd[721]: check_clients: dropping disconnected client
Jul 3 00:19:50 robs-pc kernel: uhub_reattach_port: giving up port reset - device vanished

From Xorg.0.log:

(EE) [mi] EQ overflowing. Additional events will be discarded until existing events are processed.
(EE)
(EE) Backtrace:
(EE) 0: /usr/local/bin/X (?+0x0) [0x3dd360]
(EE) 1: /usr/local/bin/X (?+0x0) [0x2a1d30]
(EE) 2: /usr/local/bin/X (?+0x0) [0x2de8b0]
(EE) 3: /usr/local/lib/xorg/modules/input/mouse_drv.so (?+0x0) [0xe06a25990]
(EE) 4: /usr/local/lib/xorg/modules/input/mouse_drv.so (?+0x0) [0xe06a22e10]
(EE) 5: /usr/local/lib/xorg/modules/input/mouse_drv.so (?+0x0) [0xe06a21e90]
(EE) 6: /usr/local/bin/X (?+0x0) [0x2cf780]
(EE) 7: /usr/local/bin/X (?+0x0) [0x2f3030]
(EE) 8: /lib/libthr.so.3 (pthread_sigmask+0x536) [0x800ae9916]
(EE) 9: /lib/libthr.so.3 (pthread_getspecific+0xe12) [0x800ae96f2]
(EE) 10: ? (?+0xe12) [0x7fffffffee15]
(EE) 11: /usr/local/lib/xorg/modules/drivers/nvidia_drv.so (nvidiaAddDrawableHandler+0x52a3a) [0x80230dae4]
(EE)
(EE) [mi] These backtraces from mieqEnqueue may point to a culprit higher up the stack.
(EE) [mi] mieq is NOT the cause. It is a victim.
(EE) [mi] EQ overflow continuing. 100 events have been dropped.
(EE)