FreeBSD 14.2 / Xorg hardware acceleration is not working on RTX 5070 Ti

Hello, I’m running FreeBSD 14.2p3 with an RTX 5070 Ti, and whenever I use a browser or try to play back any video file, X11 starts stuttering. The refresh rate of all X11 windows drops to roughly one update every 5 seconds, which makes the desktop completely unusable. The issue occurs with the official 570.144 driver as well as with the driver from the FreeBSD package “nvidia-driver-570.124.04.1402000” (2025Q2).

In the system console logs there are repeating Xid 16 errors:

NVRM: Xid (PCI:0000:01:00): 16, Head 00000003 Count 000012a3
NVRM: Xid (PCI:0000:01:00): 16, Head 00000003 Count 000012a4
NVRM: Xid (PCI:0000:01:00): 16, Head 00000003 Count 000012a5

As well as a locking error:

[  1907.793] nvLock: client timed out, taking the lock
[  2626.289] (WW) NVIDIA: Wait for channel idle timed out.
[  2642.502] (WW) NVIDIA: Wait for channel idle timed out.

If I swap the GPU to an RTX 4070 with the very same setup/driver, the issue is not present. Xorg logs and nvidia-smi outputs for the 5070 Ti and the 4070 are attached.

I’ve tried disabling the GSP firmware for the 5070 Ti using the hw.nvidia.registry.EnableGpuFirmware=0 tunable in /boot/loader.conf, but this locks up the system as soon as nvidia-smi or startx is used.
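For anyone who wants to repeat the experiment, this is the tunable as it sat in my loader.conf (shown only for completeness; be warned that on my 5070 Ti it caused the lockups described above):

```
# /boot/loader.conf
# Run the NVIDIA driver without GSP firmware offload.
# On my 5070 Ti this locked up the system once nvidia-smi or startx ran.
hw.nvidia.registry.EnableGpuFirmware=0
```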

It looks like there is an issue with the driver firmware for the 5070 Ti, or in the driver logic that handles Blackwell GPUs.

Setup:
Motherboard: Gigabyte Aorus Elite AX rev. 1.2 (BIOS version: FB4)
GPU: NVIDIA RTX 5070 Ti
CPU: AMD Ryzen 9 7900X
Display: LG C3 OLED TV (connected directly via HDMI)

Software:
nvidia-driver-570.124.04.1402000
xorg-server-21.1.16,1

How to reproduce:
Playing any video file with mpv on FreeBSD 14.2p3 with the 5070 Ti should reproduce the bug. On my setup, the bug reproduces 100% of the time.

4070_nvidia-smi.log (10.8 KB)
4070_Xorg.0.log (12.7 KB)
5070Ti_nvidia-smi.log (10.4 KB)
5070Ti_Xorg.0.log (12.8 KB)


BTW, I’ve seen some GLX/EGL fixes in the release notes of the 575.51.02 beta driver, but I’m unable to use this driver due to the following error:

NVRM: failed to wait for bar firewall to lower

Disabling Resizable BAR in the BIOS does not help.

I have exactly the same issue, same card: https://forums.developer.nvidia.com/t/rtx-5070-ti-hangs-with-any-transparent-windows-and-or-vulkan-apps-on-freebsd/334937

Are you still having the same issue?

Just re-tested on FreeBSD 14.3 with the latest release driver 575.64. The issue is still present.

There is a FreeBSD bug tracker page where people are reporting similar issues with Blackwell GPUs:

Btw, which GPU vendor are you using? Mine is Gigabyte (GV-N507TEAGLE OC-16GD).
I’m curious whether the issue is vendor-specific. For example, the GPU BIOS could contain vendor-specific code, a different memory mapping for parameters, or Vulkan-related flags that trigger the issue.

What about your experience?

Nah, same here with the current 570.169.

I think I’ve been one of the first to try a 50 series with FreeBSD: https://forums.freebsd.org/threads/xorg-wont-start-with-officially-supported-nvidia-5070-gpu.97659/page-3

When the issue was first reported on the FreeBSD forums, I had a Gigabyte RTX 5070 Ti Windforce OC 16GB (GV-N507TWF3OC-16GD).

I currently have an ASUS TUF 5070 Ti (2C05-300-A1), VBIOS 98.03.3B.80.2F, and the issue is the same. I don’t think it is vendor-specific, but rather model-specific. Not only that, but I have it from a good source that the issue is not present on a 5060.


It looks like when the issue is triggered, the PCIe link speed drops to Gen1. When Vulkan apps are not used, I don’t see the speed downgrade. The downgrade can be observed in the nvidia-smi -q output.

Before the issue

GPU Link Info
	PCIe Generation
		Max                       : 5
		Current                   : 2
		Device Current            : 2
		Device Max                : 5
		Host Max                  : 5
	Link Width
		Max                       : 16x
		Current                   : 16x

After the issue

GPU Link Info
	PCIe Generation
		Max                       : 5
		Current                   : 1
		Device Current            : 1
		Device Max                : 5
		Host Max                  : 5
	Link Width
		Max                       : 16x
		Current                   : 16x
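If you want to watch for the downgrade from a script, the relevant field can be pulled out of the `nvidia-smi -q` output with a little awk. This is just a sketch: the heredoc below stands in for the live output shown above, and on a real system you would pipe `nvidia-smi -q` in instead.

```shell
# Print the "Device Current" PCIe generation from nvidia-smi -q style output.
# Live use (assumed): nvidia-smi -q | awk '/GPU Link Info/{f=1} f && /Device Current/{print $NF; exit}'
awk '/GPU Link Info/{f=1} f && /Device Current/{print $NF; exit}' <<'EOF'
GPU Link Info
	PCIe Generation
		Max                       : 5
		Current                   : 1
		Device Current            : 1
		Device Max                : 5
		Host Max                  : 5
EOF
```

With the sample above this prints `1`, i.e. the downgraded state.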

Also found that the max/host PCIe speeds are not correctly detected by default. On my setup the GPU and CPU are capable of PCIe Gen5, but the motherboard can only do Gen4. After switching the speed from “Auto” to “Gen4” in the motherboard BIOS, nvidia-smi started picking up the correct Gen4 value.

GPU Link Info
	PCIe Generation
		Max                       : 4
		Current                   : 2
		Device Current            : 2
		Device Max                : 5
		Host Max                  : 4
	Link Width
		Max                       : 16x
		Current                   : 16x

However, this did not solve the issue: the system still stutters and the link is still downgraded to Gen1.

Later, I downgraded the PCIe speed to Gen1 just to test the bandwidth. It is not the limiting factor; the stuttering is caused by something else.

Interesting. Mine seems to detect the correct speeds, but like you said, bandwidth is not the limiting factor.
Before the issue:

        GPU Link Info
            PCIe Generation
                Max                       : 5
                Current                   : 5
                Device Current            : 5
                Device Max                : 5
                Host Max                  : 5
            Link Width
                Max                       : 16x
                Current                   : 16x

But after, it’s exactly the same:

GPU Link Info
	PCIe Generation
		Max                       : 5
		Current                   : 1
		Device Current            : 1
		Device Max                : 5
		Host Max                  : 5
	Link Width
		Max                       : 16x
		Current                   : 16x

The moment the issue occurs, the core and memory clocks jump to their maximum values and the number of active SM units drops to zero. It looks like something is crashing or hanging inside the GPU firmware.

nvidia-smi dmon -s pucvmet

# gpu    pwr  gtemp  mtemp     sm    mem    enc    dec    jpg    ofa   mclk   pclk  pviol  tviol     fb   bar1   ccpm  sbecc  dbecc    pci  rxpci  txpci 
# Idx      W      C      C      %      %      %      %      %      %    MHz    MHz      %   bool     MB     MB     MB   errs   errs   errs   MB/s   MB/s 
    0     35     46      -     18     12      0      0      0      0    810    173      0      0    803     13      0      -      -      0      5      2 
    0     35     46      -     18     12      0      0      0      0    810    164      0      0    803     13      0      -      -      0      3      3 
    0     35     46      -     18     12      0      0      0      0    810    175      0      0    803     13      0      -      -      0      2      2 
    0     35     46      -     28     19      0      0      0      0    810    177      0      0    803     13      0      -      -      0      3      2 
    0     35     46      -     31     21      0      0      0      0    810   1485      0      0    803     13      0      -      -      0      1      1 
    0     37     46      -     28     19      0      0      0      0  14001   1515      0      0    920     17      0      -      -      0     64     64 
    0     57     47      -      8      1      0      0      0      0  14001   2850      0      0   1438     20      0      -      -      0     37     18 
    0     56     47      -      0      0      0      0      0      0  14001   2850      0      0   1438     20      0      -      -      0     38     17 
    0     50     46      -      0      0      0      0      0      0  14001   2535      0      0   1438     20      0      -      -      0     37     18 
    0     45     46      -      0      0      0      0      0      0  14001   2535      0      0   1438     20      0      -      -      0     38     17 
    0     43     46      -      0      0      0      0      0      0   7001   2137      0      0   1438     20      0      -      -      0     33     17 
    0     35     45      -      0      1      0      0      0      0    810    382      0      0   1438     20      0      -      -      0     29     14 
    0     25     45      -      0      4      0      0      0      0    810    187      0      0   1438     20      0      -      -      0     27     13 
    0     25     45      -      0      8      0      0      0      0    405    187      0      0   1438     20      0      -      -      0     27     12 
    0     25     45      -      0      7      0      0      0      0    405    270      0      0   1438     20      0      -      -      0     27     13 
    0     25     45      -      0      7      0      0      0      0    405    180      0      0   1438     20      0      -      -      0     27     13 
    0     24     45      -      0      0      0      0      0      0    405    187      0      0   1438     20      0      -      -      0     27     13 
    0     24     45      -      0      1      0      0      0      0    405    180      0      0   1438     20      0      -      -      0     34     13 
    0     24     45      -      0      0      0      0      0      0    405    180      0      0   1438     20      0      -      -      0     27     13 
    0     24     45      -      0      0      0      0      0      0    405    180      0      0   1438     20      0      -      -      0     27     13 
    0     24     45      -      0      0      0      0      0      0    405    180      0      0   1438     20      0      -      -      0     42     13 
    0     25     45      -      0      6      0      0      0      0    405    187      0      0   1406     20      0      -      -      0     26     13 

I tried limiting the clocks, but it didn’t help:

sudo nvidia-smi -pm 1
sudo nvidia-smi --lock-gpu-clocks=100,800
sudo nvidia-smi --lock-memory-clocks=100,1000
nvidia-smi dmon -s pucvmet

# gpu    pwr  gtemp  mtemp     sm    mem    enc    dec    jpg    ofa   mclk   pclk  pviol  tviol     fb   bar1   ccpm  sbecc  dbecc    pci  rxpci  txpci 
# Idx      W      C      C      %      %      %      %      %      %    MHz    MHz      %   bool     MB     MB     MB   errs   errs   errs   MB/s   MB/s 
    0     26     38      -     32     23      0      0      0      0    405     58      0      0    769     13      0      -      -      0      5      1 
    0     26     38      -     38     25      0      0      0      0    405     59      2      0    769     13      0      -      -      0      3      3 
    0     26     38      -     39     26      0      0      0      0    405    472      1      0    769     13      0      -      -      0      2      6 
    0     26     38      -     28     22      0      0      0      0    405     60      2      0    769     13      0      -      -      0      9     30 
    0     26     38      -     29     12      0      0      0      0    810    795      7      0    830     15      0      -      -      0     34     27 
    0     26     38      -     43     10      0      0      0      0    810     99      0      0    886     17      0      -      -      0     24     28 
    0     26     38      -     23     14      0      0      0      0    810     98      2      0   1372     20      0      -      -      0    689     26 
    0     26     38      -      0      4      0      0      0      0    810    795      0      0   1372     20      0      -      -      0     29     13 
    0     26     38      -      0      4      0      0      0      0    810    795      0      0   1372     20      0      -      -      0     29     14 
    0     25     38      -      0      4      0      0      0      0    810    795      0      0   1372     20      0      -      -      0     27     13 
    0     25     38      -      0      8      0      0      0      0    405    180      0      0   1372     20      0      -      -      0     30     13 
    0     24     38      -      0      0      0      0      0      0    405    180      0      0   1372     20      0      -      -      0     26     13 
    0     24     38      -      0      1      0      0      0      0    405    180      0      0   1372     20      0      -      -      0     26     13 
    0     24     38      -      0      0      0      0      0      0    405    180      0      0   1372     20      0      -      -      0     27     14 

I hadn’t tried limiting the clocks yet; I guess I can rule that out.
In my logs, I have NVRM: Xid (pciXXX) 16, which indicates a display engine hang. Could this be related to missing/disabled ROPs on the card?

I checked the number of ROPs right after buying the GPU; they were all in place. What about yours?

According to the card specifications, yes, all there. My guess is the 5070 Ti shares the same die as the 5080, but with some ROPs disabled… hence my question.

Just checked the hash sums of the GPU firmware file gsp_ga10x.bin for both the FreeBSD and Linux drivers (version 570.169). They are identical. If the firmware were addressing the wrong ROPs, I believe this issue would be widely present on Linux, but that doesn’t seem to be the case. Also, when booting into Windows, I don’t experience this behavior.
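For reference, the comparison itself is trivial; FreeBSD ships sha256(1) while Linux typically has sha256sum(1), so a portable sketch looks like this (same_blob is my own helper name, and the paths in the usage comment are examples, not fixed locations):

```shell
# Compare two firmware blobs by SHA-256, using whichever tool is present.
# Usage (example paths): same_blob freebsd/gsp_ga10x.bin linux/gsp_ga10x.bin
same_blob() {
    if command -v sha256sum >/dev/null 2>&1; then
        a=$(sha256sum "$1" | awk '{print $1}')
        b=$(sha256sum "$2" | awk '{print $1}')
    else
        a=$(sha256 -q "$1")
        b=$(sha256 -q "$2")
    fi
    [ "$a" = "$b" ] && echo identical || echo different
}
```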

So basically, it’s not a firmware issue? Does that remove NVIDIA’s driver logic from the equation and point to an issue with the FreeBSD implementation only, more specifically LinuxKPI?

Only NVIDIA engineers can tell for sure. IMO, if it were LinuxKPI, we would likely see this issue on the 4000 series as well, but we don’t.

Yesterday I tried disabling Vulkan on the system completely by renaming the /usr/local/share/vulkan/icd.d directory and turning off Firefox hardware acceleration. The system seemed stable, but switching between windows that use graphics still sometimes triggers the issue. To me it looks like a race condition.
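For anyone who wants to repeat the experiment, here is the rename wrapped as a reversible pair of helpers (disable_vulkan/enable_vulkan are my own names, and the ICD path is the FreeBSD ports default; adjust if yours differs):

```shell
# Reversibly hide the Vulkan ICD directory so loaders find no drivers
# and apps fall back to GL. ICD_DIR can be overridden in the environment.
ICD_DIR="${ICD_DIR:-/usr/local/share/vulkan/icd.d}"
disable_vulkan() { mv "$ICD_DIR" "$ICD_DIR.disabled"; }
enable_vulkan()  { mv "$ICD_DIR.disabled" "$ICD_DIR"; }
```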

Btw, do you know if there is a way to set up debug logging in the NVIDIA driver, if it has such a thing?

I compiled the driver from ports without Linux support and disabled the Linux compatibility layer in /boot/loader.conf. The issue is still present.

That’s not going to make a difference: the driver is built against the LinuxKPI from the FreeBSD kernel, while the Linux compatibility layer only lets you run unmodified Linux binaries: LinuxKPI - FreeBSD Wiki


You can debug Xorg, or possibly use dtrace (dtrace & flamegraphs: GitHub - brendangregg/FlameGraph: Stack trace visualizer). However, the driver debug symbols are not available, so the output will not be of much use.

Didn’t know that LinuxKPI is a standalone thing. Thanks for the info.

I tried running mpv in the debugger to find what is triggering the issue, but without the NVIDIA symbols it is mostly useless. Anyway, this is what I scraped together.

Before the issue

  thread #13, name = 'vo'
    frame #0: 0x00000008386e45ba libc.so.7`__sys_ioctl + 10
    frame #1: 0x000000087b03c428 libnvidia-glcore.so.1`___lldb_unnamed_symbol46512 + 56
    frame #2: 0x000000087b03d830 libnvidia-glcore.so.1`___lldb_unnamed_symbol46533 + 112
    frame #3: 0x000000087b03ff9f libnvidia-glcore.so.1`___lldb_unnamed_symbol46549 + 239
    frame #4: 0x000000087b0402d5 libnvidia-glcore.so.1`___lldb_unnamed_symbol46550 + 21
    frame #5: 0x000000087af2c149 libnvidia-glcore.so.1`___lldb_unnamed_symbol44606 + 441
    frame #6: 0x000000087af28c1d libnvidia-glcore.so.1`___lldb_unnamed_symbol44506 + 813
    frame #7: 0x000000087af2a245 libnvidia-glcore.so.1`___lldb_unnamed_symbol44529 + 149
    frame #8: 0x0000000823ad1919 libplacebo.so.349`___lldb_unnamed_symbol1391 + 681
    frame #9: 0x0000000823ad8c8a libplacebo.so.349`___lldb_unnamed_symbol1406 + 218
    frame #10: 0x0000000823ae1a68 libplacebo.so.349`___lldb_unnamed_symbol1456 + 792
    frame #11: 0x0000000823a984d2 libplacebo.so.349`pl_pass_run + 2082
    frame #12: 0x0000000823a9c9d9 libplacebo.so.349`___lldb_unnamed_symbol1215 + 1145
    frame #13: 0x0000000823a984d2 libplacebo.so.349`pl_pass_run + 2082
    frame #14: 0x000000000044012c mpv`renderpass_run_pl(ra=0x00003a6d5e620d10, params=0x000000086dcc0488) at ra_pl.c:580:5
    frame #15: 0x0000000000411609 mpv`gl_sc_dispatch_draw(sc=0x00003a6d5e799a50, target=0x00003a6d5e605830, discard=true, vao=0x00003a6d5e9508d0, vao_len=3, vertex_stride=24, vertices=0x00003a6d5e64f550, num_vertices=6) at shader_cache.c:1020:5
    frame #16: 0x000000000041e6fd mpv`render_pass_quad(p=0x00003a6d639c68d0, fbo=0x000000086dcc0f30, discard=true, dst=0x000000086dcc0f08) at video.c:1339:12
    frame #17: 0x000000000041dcf0 mpv`finish_pass_fbo(p=0x00003a6d639c68d0, fbo=0x000000086dcc0f30, discard=true, dst=0x000000086dcc0f08) at video.c:1347:32
    frame #18: 0x000000000041d82f mpv`finish_pass_tex(p=0x00003a6d639c68d0, dst_tex=0x00003a6d639c7048, w=3840, h=2160) at video.c:1385:9
    frame #19: 0x0000000000421583 mpv`pass_scale_main(p=0x00003a6d639c68d0) at video.c:2597:5
    frame #20: 0x000000000041921e mpv`pass_render_frame(p=0x00003a6d639c68d0, mpi=0x00003a6d5e963fd0, id=23, flags=11) at video.c:3106:5
    frame #21: 0x0000000000417d84 mpv`gl_video_render_frame(p=0x00003a6d639c68d0, frame=0x00003a6d5e950010, fbo=0x000000086dcc1db0, flags=11) at video.c:3452:22
    frame #22: 0x0000000000430b61 mpv`draw_frame(vo=0x00003a6d5aa5a650, frame=0x00003a6d5e950010) at vo_gpu.c:82:5
    frame #23: 0x000000000042f8fa mpv`do_redraw(vo=0x00003a6d5aa5a650) at vo.c:1104:5
    frame #24: 0x000000000042eaf0 mpv`vo_thread(ptr=0x00003a6d5aa5a650) at vo.c:1187:13
    frame #25: 0x0000000837621b52 libthr.so.3`___lldb_unnamed_symbol565 + 306

After the issue

  thread #13, name = 'vo'
    frame #0: 0x00000008386e745a libc.so.7`__sys_ppoll + 10
    frame #1: 0x000000083762dc1c libthr.so.3`___lldb_unnamed_symbol735 + 60
    frame #2: 0x000000000044f2ea mpv`mp_poll(fds=0x000000086dcc1eb0, nfds=2, timeout_ns=10000000000) at poll_wrapper.c:36:12
    frame #3: 0x000000000047c7da mpv`vo_x11_wait_events(vo=0x00003a6d5aa5a650, until_time_ns=1677287336308) at x11_common.c:2282:5
    frame #4: 0x000000000048f720 mpv`xlib_wait_events(ctx=0x00003a6d5e620050, until_time_ns=1677287336308) at context_xlib.c:131:5
    frame #5: 0x0000000000430d98 mpv`wait_events(vo=0x00003a6d5aa5a650, until_time_ns=1677287336308) at vo_gpu.c:265:9
    frame #6: 0x000000000042f9af mpv`wait_vo(vo=0x00003a6d5aa5a650, until_time=1677287336308) at vo.c:729:9
    frame #7: 0x000000000042eb75 mpv`vo_thread(ptr=0x00003a6d5aa5a650) at vo.c:1207:9
    frame #8: 0x0000000837621b52 libthr.so.3`___lldb_unnamed_symbol565 + 306

The Vulkan threads stayed on the same stack frames.

I think you should report these tests on the bug tracker page linked above. At the bottom there are a couple of tunables that might be useful for further testing.