I'm failing to configure the framebuffer as the default graphics adapter and the NVIDIA GeForce RTX 2080 Ti as a PRIME render offload device

Hello everyone.

What I'm trying to do is set the framebuffer video adapter as the primary graphics card on my bhyve Ubuntu VM, instead of the NVIDIA RTX 2080 Ti that I have passed through. What I really want is to use both graphics adapters, with the framebuffer as primary and the NVIDIA as secondary. I suspect that I need to use the integrated graphics adapter to apply NVIDIA's "PRIME render offload" configuration, which I think is what I actually need. The problem is that at the moment I can't use two monitors. So, my goal is the same one explained on the official NVIDIA website:

https://download.nvidia.com/XFree86/Linux-x86_64/435.17/README/primerenderoffload.html

where we can read:

PRIME render offload is the ability to have an X screen rendered by one GPU, but choose certain applications within that X screen to be rendered on a different GPU. This is particularly useful in combination with dynamic power management to leave an NVIDIA GPU powered off, except when it is needed to render select performance-sensitive applications.

"To use NVIDIA's PRIME render offload support, configure the X server with an X screen using an integrated GPU with the xf86-video-modesetting X driver"
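For reference, once the offload configuration works, the same README drives it per-application with environment variables; e.g. to render a single GLX client on the NVIDIA GPU while the X screen stays on the integrated adapter:

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep vendor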

If I'm not mistaken, once I accomplish that I can use a single monitor, where I will see Linux loaded as a bhyve VM inside a window (smaller than the size of my screen), and at the same time use my RTX 2080 Ti for experimenting with "stable diffusion" without needing another monitor. Stable diffusion needs a powerful graphics card to work.

What have I tried so far? I applied the Xorg configuration explained on the NVIDIA website, but Xorg failed and reported some errors.
So, the controller that you see below should be used as the primary one inside the Ubuntu VM:

-s 6,fbuf,tcp=0.0.0.0:5919,w=1600,h=950,wait \

while the ones you see below should be the secondary:


02:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)
02:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)
02:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)
02:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

The script that I use to launch the VM is the following:

bhyve -S -c sockets=1,cores=2,threads=2 -m 4G -w -H -A \
-s 0,hostbridge \
-s 2,virtio-blk,/mnt/$vmdisk1'p2'/bhyve/img/Linux/ubuntu2210.img,bootindex=1 \
-s 3,virtio-blk,/dev/$vmdisk4 \
-s 4,virtio-blk,/dev/$vmdisk2 \
-s 6,fbuf,tcp=0.0.0.0:5919,w=1600,h=950,wait \
-s 8:0,passthru,2/0/0 \
-s 8:1,passthru,2/0/1 \
-s 8:2,passthru,2/0/2 \
-s 8:3,passthru,2/0/3 \
-s 10,virtio-net,tap19 \
-s 11,virtio-9p,sharename=/ \
-s 30,xhci,tablet \
-s 31,lpc \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI_CODE.fd \
vm0:19 < /dev/null & sleep 2 && vncviewer 0:19

In /boot/loader.conf I've added:

pptdevs="2/0/0 2/0/1 2/0/2 2/0/3"

Inside the Ubuntu guest OS, the passed-through graphics adapters show up like this:

00:06.0 VGA compatible controller: Device fb5d:40fb
00:08.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)
00:08.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)
00:08.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)
00:08.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

As I said before, I tried the xorg.conf file suggested by the NVIDIA website to achieve the goal:

Section "ServerLayout"
  Identifier "layout"
  Screen 0 "iGPU"
EndSection

Section "Device"
  Identifier "iGPU"
  Driver "modesetting"
  BusID    "PCI:0:6:0
EndSection

Section "Screen"
  Identifier "iGPU"
  Device "iGPU"
EndSection

Section "ServerLayout"
  Identifier "layout"
  Option "AllowNVIDIAGPUScreens"
EndSection
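If that layout is picked up, the README linked above says xrandr --listproviders should then report at least two providers: the modesetting one, plus an NVIDIA GPU screen named something like "NVIDIA-G0":

xrandr --listproviders
# expected on success: a provider named "modesetting" and one named "NVIDIA-G0"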

but it didn't work; Xorg failed with the errors shown in the log file linked below.

Anyway, there is something that works as expected, according to the NVIDIA website:

xrandr --listproviders

Providers: number : 1
Provider 0: id: 0x1b7 cap: 0x0 crtcs: 4 outputs: 8 associated providers: 0 name:NVIDIA-0

and:

nvidia-smi

Tue Dec  6 16:34:35 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.86.01    Driver Version: 515.86.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:00:08.0 Off |                  N/A |
| 29%   26C    P8    20W / 250W |      1MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
lsmod | grep nvidia-drm
(no output)

dmesg | grep nvidia-drm

[    2.927164] [drm] [nvidia-drm] [GPU ID 0x00000008] Loading driver
[    4.743168] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:00:08.0 on minor 0

According to the logs, it looks like Xorg selects the framebuffer device as the primary graphics adapter in the first place:

(--) PCI:*(0@0:6:0) ...
(--) PCI: (0@0:8:0) ...

This means that my problem is no longer related to the hypervisor configuration, but to how the Ubuntu guest OS is configured. So I'm sure that this question belongs in a support forum like Ubuntu's and/or NVIDIA's. More NVIDIA than Ubuntu.

You can find the dmesg messages here: Ubuntu Pastebin
Here, instead, you can have a look at the Xorg log file: Ubuntu Pastebin

The NVIDIA website also says:

Also, confirm that the xf86-video-modesetting X driver is using “glamoregl”. The log file /var/log/Xorg.0.log should contain something like this…

but I don't see the word "glamoregl" anywhere in /var/log/Xorg.0.log.
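A direct way to check, assuming the stock log location, is:

grep -i glamor /var/log/Xorg.0.log
# a working modesetting+glamor setup should log something like "glamor initialized"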

If I use ONLY the framebuffer argument, it works: the desktop manager loads inside the VM window. But if I declare both the framebuffer AND the NVIDIA slots among the bhyve parameters, the VM window shows a blinking cursor with this error: "nvidiafb: unknown NV_ARCH", and the physical monitor turns off.

Is the framebuffer used by bhyve (the fbuf,tcp=0.0.0.0:5919,w=1600,h=950,wait device) considered an "integrated GPU with the xf86-video-modesetting X driver"?

The kernel DRM driver used for the virtual VGA needs to have PRIME support for this to work. So first of all it needs a DRM driver; this doesn't work with simple framebuffers (fbdev).
Different from, but similar to, the support recently added to the ASpeed server graphics:
https://www.phoronix.com/news/AST-DMA-BUF-PRIME-Sharing
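A way to see what actually drives the virtual VGA device inside the guest (the 00:06.0 address matches the lspci output above):

lspci -k -s 00:06.0   # "Kernel driver in use:" shows whether a DRM driver is bound at all
ls /sys/class/drm/    # DRM-capable devices appear as card0, card1, ...; a bare fbdev device will not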

Will it work if I pass through the Intel graphics card that I have in my system, instead of the fbdev framebuffer? I mean this one:

00:02.0 Display controller: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630] (rev 02)

At the moment it is not passed through correctly, so I tried with the fbdev. Or… can I apply the patch that I found on the Phoronix website? Will it work, or is it related to the ASpeed graphics only?

The patch strictly works only with the ASpeed graphics, as that is a real device with dedicated video memory. It was merely meant as an example: if a DRM driver already exists, extending it with PRIME functions is not too complex, at least to get some output.
Passing through the Intel iGPU as well should work, though I don't know of anyone who has ever tried it.

What happens if, instead of passing the Intel GPU, I pass the GTX 1060 (choosing the Intel GPU as first and default in the BIOS instead of the 1060) and use it with modesetting, with the 2080 Ti for PRIME render offload? Is the configuration that I posted correct? Will it work as-is, or should I modify it in some way?

For nvidia-to-nvidia PRIME you don't need the modesetting driver, just driver 470+; see
https://forums.developer.nvidia.com/t/ubuntu-20-04-not-able-to-run-6-7-monitor-setup-with-one-x-screen/200904/2?u=generix
though in that case pci-to-pci transfers are involved instead of the pci-to-sysmem transfers of the Intel case, so IOMMU device isolation (needed for the passthrough on the host) might interfere.
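A one-line check that the installed driver meets that requirement, using nvidia-smi's query interface:

nvidia-smi --query-gpu=driver_version --format=csv,noheader
# must print 470.xx or newer for nvidia-to-nvidia PRIME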

Which xorg.conf file should I use in this case?

If you disable the virtual graphics, none.

Can you elaborate? What do you mean? Should I use the bus ID value in the modesetting section?

If there are only the two NVIDIA GPUs in the VM and no virtual graphics adapter, no xorg.conf is needed; Xorg autoconfigures.

OK. Anyway, I'm more interested in passing through the Intel GPU correctly, and I will try that as soon as possible. Take into consideration that it already works for a lot of users; I don't know why it doesn't for me. I'm in contact with the developer who created the patches to enable it. I should install FreeBSD from scratch and see if it will work.

I'm trying to understand why, because of an error, my Intel graphics card is not accepted by any Linux-based bhyve VM, while it works if the guest OS is Windows. The error given is:

bhyve: Warning: Unable to reuse host address of Graphics Stolen Memory. GPU passthrough might not work properly.
bhyve: gvt_d_setup_opregion: Unable to get OpRegion base and length
bhyve: gvt_d_init: Unable to setup OpRegion
device emulation initialization error: Operation not supported by device

A bhyve developer says: "Seems like you're booting your Intel GPU in legacy mode. Go into your BIOS and disable CSM." I really don't know what this means: I never knew that a "GPU can boot" or that a "GPU has a CSM mode". I went into my BIOS and disabled the CSM, but that setting is not related to GPUs; it relates to USB disks. Do you know what he's talking about? Can you explain what I should do to "disable CSM" for my Intel graphics card? Do the NVIDIA cards also have a CSM mode that can be disabled? How? Thanks.

As you know, GPUs have a video BIOS, the vbios, which gets started by the system BIOS (sbios, UEFI firmware), "booting" the GPU. Those vbioses have two parts: the old VGA BIOS and the new EFI framebuffer. Depending on what is set in the system BIOS (CSM enabled/disabled), either the VGA BIOS part or the EFI framebuffer part of the vbios is run, i.e. "booted".
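As an aside, a small third-party tool commonly used in passthrough setups, rom-parser (https://github.com/awilliam/rom-parser), can show which of those two parts a dumped vbios actually contains; a sketch, assuming a ROM image is already on disk:

./rom-parser TU102.rom
# a "type 3 (EFI)" PCIR entry is the EFI GOP image; "type 0 (x86 PC-AT)" is the legacy VGA BIOS part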

In the BIOS of my PC I don't have a CSM entry related to the GPU. If I remember correctly, the firmware of my NVIDIA graphics cards can be switched between BIOS and UEFI using an external tool whose name I don't remember, but I don't know whether it also works for my Intel GPU. In addition, why does the Intel GPU work well when passed to a Windows VM? If the GPU were not configured correctly, it should not work with a Windows VM either…

With which tool can I change the mode of my Intel GPU?

There is no GPU-specific CSM setting; there is only one, "the" CSM setting, and the GPU will follow it.
There is no tool to switch modes, only that CSM setting in the BIOS. If you have already disabled it, it's something else that keeps it from working.
The "tool" you were thinking of for switching NVIDIA GPUs to EFI flashed a new vbios that also contained an EFI GOP, on very old cards that were sold with only a VGA BIOS.

At this point I need to ask Gigabyte whether my BIOS has the proper CSM setting and whether the setting I have is the right one, since I'm not sure. It says that it allows booting USB disks either the old BIOS way or via UEFI, and vice versa. Are we talking about this setting?

This log may grab your interest, I think. It shows the reason why my Intel iGPU is assigned to a Linux guest OS but does not work. I know that you are an NVIDIA developer, but I'm sure you can understand, more or less, what could be wrong.

[ 2.970719] i915 0000:00:07.0: [drm] VT-d active for gfx access
[ 2.970874] Console: switching to colour dummy device 80x25
[ 2.970945] i915 0000:00:07.0: [drm] Transparent Hugepage mode 'huge=within_size'
[ 2.971316] i915 0000:00:07.0: BAR 6: can't assign [??? 0x00000000 flags 0x20000000] (bogus alignment)
[ 2.971319] i915 0000:00:07.0: [drm] Failed to find VBIOS tables (VBT)
[ 3.060611] i915 0000:00:07.0: [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4)
[ 3.060868] [drm] [nvidia-drm] [GPU ID 0x00000008] Loading driver
[ 3.060819] snd_hda_intel 0000:00:08.1: bound 0000:00:07.0 (ops i915_hdcp_component_ops [i915])
[ 3.060948] ------------[ cut here ]------------
[ 3.060949] WARNING: CPU: 3 PID: 307 at sound/hda/hdac_component.c:196 hdac_component_master_bind+0x9a/0x110 [snd_hda_core]
[ 3.060957] Modules linked in: nls_iso8859_1 nvidia_drm(PO+) nvidia_modeset(PO) chromeos_pstore(-) i915(+) nvidia(PO) drm_buddy ttm
snd_hda_codec_hdmi snd_hda_intel intel_rapl_msr intel_rapl_common snd_intel_dspcfg drm_display_helper snd_intel_sdw_acpi snd_hda_codec
cec snd_usb_audio snd_hda_core crct10dif_pclmul snd_usbmidi_lib rc_core ghash_clmulni_intel snd_hwdep aesni_intel crypto_simd cryptd dr
m_kms_helper snd_seq_midi rapl joydev input_leds snd_pcm snd_seq_midi_event fb_sys_fops nvidiafb syscopyarea vgastate ucsi_ccg(+) 9pnet
_virtio sysfillrect fb_ddc typec_ucsi sysimgblt typec video mac_hid 9pnet i2c_algo_bit snd_rawmidi serio_raw snd_seq snd_seq_device snd
_timer snd hid_cmedia soundcore v4l2loopback(O) videodev mc msr parport_pc ppdev lp ramoops pstore_blk binfmt_misc drm parport reed_sol
omon pstore_zone efi_pstore qemu_fw_cfg ip_tables x_tables autofs4 hid_generic usbhid hid virtio_net net_failover failover i2c_nvidia_g
pu crc32_pclmul xhci_pci psmouse xhci_pci_renesas
[ 3.060990] i2c_ccgx_ucsi virtio_blk
[ 3.060992] CPU: 3 PID: 307 Comm: systemd-udevd Tainted: P O 5.19.0-26-generic #27-Ubuntu
[ 3.060993] Hardware name: FreeBSD BHYVE/BHYVE, BIOS 13.0 11/10/2020
[ 3.060994] RIP: 0010:hdac_component_master_bind+0x9a/0x110 [snd_hda_core]
[ 3.061026] Code: ef e8 0a 37 44 d5 85 c0 78 79 48 8d 7b 18 e8 cd 24 38 d4 31 c0 48 83 c4 08 5b 41 5d 5d 31 d2 31 c9 31 f6 31 ff c3
cc cc cc cc <0f> 0b b8 ea ff ff ff 48 89 de 4c 89 ef 89 45 ec e8 91 4a bb d4 48
[ 3.061035] RSP: 0018:ffffaaed8034f818 EFLAGS: 00010246
[ 3.061049] RAX: 0000000000000000 RBX: ffff97bf51d3fa48 RCX: 0000000000000000
[ 3.061070] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[ 3.061071] RBP: ffffaaed8034f830 R08: 0000000000000000 R09: 0000000000000000
[ 3.061072] R10: 0000000000000000 R11: 0000000000000000 R12: ffff97bf4b649700
[ 3.061072] R13: ffff97bf40c540d0 R14: 0000000000000002 R15: ffff97bf4fdfa2f8
[ 3.061073] FS: 00007f8128a058c0(0000) GS:ffff97bf7bd80000(0000) knlGS:0000000000000000
[ 3.061074] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 3.061075] CR2: 00005555fc5f4a40 CR3: 00000001022f2002 CR4: 00000000003706e0
[ 3.061076] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 3.061077] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 3.061078] Call Trace:
[ 3.061079] <TASK>
[ 3.061081] try_to_bring_up_aggregate_device+0x87/0x120
[ 3.061084] __component_add+0xba/0x1a0
[ 3.061086] component_add_typed+0x12/0x30
[ 3.061088] intel_hdcp_component_init+0x75/0x110 [i915]
[ 3.061201] intel_modeset_init_nogem+0x17f/0x340 [i915]
[ 3.061275] i915_driver_probe+0x1d4/0x490 [i915]
[ 3.061332] ? drm_privacy_screen_get+0x16d/0x190 [drm]
[ 3.061357] ? acpi_dev_found+0x64/0x80
[ 3.061360] i915_pci_probe+0x56/0x150 [i915]
[ 3.061415] local_pci_probe+0x47/0x90
[ 3.061418] pci_call_probe+0x55/0x190
[ 3.061419] pci_device_probe+0x84/0x120
[ 3.061423] really_probe+0x1df/0x3b0
[ 3.061424] __driver_probe_device+0x12c/0x1b0
[ 3.061426] driver_probe_device+0x24/0xd0
[ 3.061427] __driver_attach+0xe0/0x210
[ 3.061429] ? __device_attach_driver+0x130/0x130
[ 3.061430] bus_for_each_dev+0x90/0xe0
[ 3.061434] driver_attach+0x1e/0x30
[ 3.061435] bus_add_driver+0x187/0x230
[ 3.061436] driver_register+0x8f/0x100
[ 3.061438] __pci_register_driver+0x62/0x70
[ 3.061440] i915_pci_register_driver+0x23/0x30 [i915]
[ 3.061501] i915_init+0x3e/0xf2 [i915]
[ 3.061562] ? 0xffffffffc32a1000
[ 3.061564] do_one_initcall+0x5e/0x240
[ 3.061566] do_init_module+0x50/0x210
[ 3.061569] load_module+0xb7d/0xcd0
[ 3.061571] __do_sys_finit_module+0xc4/0x140
[ 3.061572] ? __do_sys_finit_module+0xc4/0x140
[ 3.061574] __x64_sys_finit_module+0x18/0x30
[ 3.061575] do_syscall_64+0x5b/0x90
[ 3.061577] ? __x64_sys_mmap+0x33/0x70
[ 3.061578] ? do_syscall_64+0x67/0x90
[ 3.061579] ? ext4_llseek+0x60/0x120
[ 3.061581] ? ksys_lseek+0x92/0xe0
[ 3.061583] ? exit_to_user_mode_prepare+0x30/0xb0
[ 3.061585] ? syscall_exit_to_user_mode+0x26/0x50
[ 3.061587] ? __x64_sys_lseek+0x18/0x30
[ 3.061588] ? do_syscall_64+0x67/0x90
[ 3.061589] entry_SYSCALL_64_after_hwframe+0x63/0xcd
[ 3.061591] RIP: 0033:0x7f8128916c4d
[ 3.061593] Code: 5d c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c
24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 83 f1 0d 00 f7 d8 64 89 01 48
[ 3.061594] RSP: 002b:00007ffe4526c108 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
[ 3.061596] RAX: ffffffffffffffda RBX: 00005555fc6f17b0 RCX: 00007f8128916c4d
[ 3.061596] RDX: 0000000000000000 RSI: 00007f8128ac8458 RDI: 0000000000000019
[ 3.061597] RBP: 00007f8128ac8458 R08: 0000000000000000 R09: 00007ffe4526c230
[ 3.061598] R10: 0000000000000019 R11: 0000000000000246 R12: 0000000000020000
[ 3.061598] R13: 00005555fc60a580 R14: 0000000000000000 R15: 00005555fc6f4020
[ 3.061600] </TASK>
[ 3.061600] ---[ end trace 0000000000000000 ]---
[ 3.061608] snd_hda_intel 0000:00:08.1: adev bind failed: -22
[ 3.366340] nvidia-gpu 0000:00:08.3: i2c timeout error e0000000
[ 3.366345] ucsi_ccg 0-0008: i2c_transfer failed -110
[ 3.366347] ucsi_ccg 0-0008: ucsi_ccg_init failed - -110
[ 3.366349] ucsi_ccg: probe of 0-0008 failed with error -110
[ 3.426012] tsc: Refined TSC clocksource calibration: 3597.416 MHz
[ 3.426024] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x33daca713ae, max_idle_ns: 440795269098 ns
[ 3.459296] clocksource: Switched to clocksource tsc
[ 3.548873] loop0: detected capacity change from 0 to 8
[ 3.549210] Dev loop0: unable to read RDB block 8
[ 3.549215] loop0: unable to read partition table
[ 3.549218] loop0: partition table beyond EOD, truncated
[ 3.707526] i915 0000:00:07.0: [drm] failed to retrieve link info, disabling eDP
[ 3.707815] i915 0000:00:07.0: [drm] [ENCODER:94:DDI B/PHY B] is disabled/in DSI mode with an ungated DDI clock, gate it
[ 3.707821] i915 0000:00:07.0: [drm] [ENCODER:111:DDI C/PHY C] is disabled/in DSI mode with an ungated DDI clock, gate it
[ 3.707824] i915 0000:00:07.0: [drm] [ENCODER:121:DDI D/PHY D] is disabled/in DSI mode with an ungated DDI clock, gate it
[ 4.194609] process '/usr/bin/anydesk' started with executable stack
[ 4.401745] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if yo
u need this.
[ 4.854484] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:00:08.0 on minor 1
[ 4.856763] [drm] Initialized i915 1.6.0 20201103 for 0000:00:07.0 on minor 0
[ 4.861400] ------------[ cut here ]------------
[ 4.861405] i915 0000:00:07.0: drm_WARN_ON(acomp->base.ops || acomp->base.dev)
[ 4.861429] WARNING: CPU: 2 PID: 307 at drivers/gpu/drm/i915/display/intel_audio.c:1261 i915_audio_component_bind+0x4b/0x130 [i915]
[ 4.861554] Modules linked in: xt_CHECKSUM xt_MASQUERADE xt_conntrack ipt_REJECT nf_reject_ipv4 xt_tcpudp nft_compat nft_chain_nat n
f_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables libcrc32c nfnetlink bridge stp llc overlay nls_iso8859_1 nvidia_drm(PO) nvid
ia_modeset(PO) i915(+) nvidia(PO) drm_buddy ttm snd_hda_codec_hdmi snd_hda_intel intel_rapl_msr intel_rapl_common snd_intel_dspcfg drm_
display_helper snd_intel_sdw_acpi snd_hda_codec cec snd_usb_audio snd_hda_core crct10dif_pclmul snd_usbmidi_lib rc_core ghash_clmulni_i
ntel snd_hwdep aesni_intel crypto_simd cryptd drm_kms_helper snd_seq_midi rapl joydev input_leds snd_pcm snd_seq_midi_event fb_sys_fops
nvidiafb syscopyarea vgastate ucsi_ccg 9pnet_virtio sysfillrect fb_ddc typec_ucsi sysimgblt typec video mac_hid 9pnet i2c_algo_bit snd
_rawmidi serio_raw snd_seq snd_seq_device snd_timer snd hid_cmedia soundcore v4l2loopback(O) videodev mc msr parport_pc ppdev lp ramoop
s pstore_blk binfmt_misc drm parport reed_solomon
[ 4.861595] pstore_zone efi_pstore qemu_fw_cfg ip_tables x_tables autofs4 hid_generic usbhid hid virtio_net net_failover failover i
2c_nvidia_gpu crc32_pclmul xhci_pci psmouse xhci_pci_renesas i2c_ccgx_ucsi virtio_blk
[ 4.861605] CPU: 2 PID: 307 Comm: systemd-udevd Tainted: P W O 5.19.0-26-generic #27-Ubuntu
[ 4.861607] Hardware name: FreeBSD BHYVE/BHYVE, BIOS 13.0 11/10/2020
[ 4.861607] RIP: 0010:i915_audio_component_bind+0x4b/0x130 [i915]
[ 4.861682] Code: 8b 5f 50 48 85 db 0f 84 e8 00 00 00 e8 5e bf 10 d2 48 c7 c1 f8 94 1c c3 48 89 da 48 c7 c7 7a 19 1b c3 48 89 c6 e8
8a 89 5e d2 <0f> 0b b8 ef ff ff ff 5b 41 5c 41 5d 5d 31 d2 31 c9 31 f6 31 ff c3
[ 4.861683] RSP: 0018:ffffaaed8034f7b0 EFLAGS: 00010246
[ 4.861685] RAX: 0000000000000000 RBX: ffff97bf40a91990 RCX: 0000000000000000
[ 4.861686] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[ 4.861686] RBP: ffffaaed8034f7c8 R08: 0000000000000000 R09: 0000000000000000
[ 4.861687] R10: 0000000000000000 R11: 0000000000000000 R12: ffff97bf40c460d0
[ 4.861688] R13: ffff97bf4fdf8000 R14: ffff97bf51d3fa48 R15: ffff97bf4ba5f340
[ 4.861689] FS: 00007f8128a058c0(0000) GS:ffff97bf7bd00000(0000) knlGS:0000000000000000
[ 4.861690] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 4.861694] CR2: 00005555fc6f6228 CR3: 00000001022f2004 CR4: 00000000003706e0
[ 4.861695] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 4.861695] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 4.861696] Call Trace:
[ 4.861697] <TASK>
[ 4.861700] component_bind+0x63/0x120
[ 4.861705] component_bind_all+0xae/0x140
[ 4.861707] hdac_component_master_bind+0x3a/0x110 [snd_hda_core]
[ 4.861715] try_to_bring_up_aggregate_device+0x87/0x120
[ 4.861716] __component_add+0xba/0x1a0
[ 4.861718] component_add_typed+0x12/0x30
[ 4.861719] intel_audio_init+0x43/0xf0 [i915]
[ 4.861795] intel_display_driver_register+0x39/0x60 [i915]
[ 4.861867] i915_driver_probe+0x25b/0x490 [i915]
[ 4.861921] ? drm_privacy_screen_get+0x16d/0x190 [drm]
[ 4.861954] ? acpi_dev_found+0x64/0x80
[ 4.861958] i915_pci_probe+0x56/0x150 [i915]
[ 4.862081] local_pci_probe+0x47/0x90
[ 4.862085] pci_call_probe+0x55/0x190
[ 4.862087] pci_device_probe+0x84/0x120
[ 4.862088] really_probe+0x1df/0x3b0
[ 4.862091] __driver_probe_device+0x12c/0x1b0
[ 4.862092] driver_probe_device+0x24/0xd0
[ 4.862093] __driver_attach+0xe0/0x210
[ 4.862095] ? __device_attach_driver+0x130/0x130
[ 4.862096] bus_for_each_dev+0x90/0xe0
[ 4.862098] driver_attach+0x1e/0x30
[ 4.862099] bus_add_driver+0x187/0x230
[ 4.862100] driver_register+0x8f/0x100
[ 4.862102] __pci_register_driver+0x62/0x70
[ 4.862104] i915_pci_register_driver+0x23/0x30 [i915]
[ 4.862164] i915_init+0x3e/0xf2 [i915]
[ 4.862223] ? 0xffffffffc32a1000
[ 4.862225] do_one_initcall+0x5e/0x240
[ 4.862228] do_init_module+0x50/0x210
[ 4.862231] load_module+0xb7d/0xcd0
[ 4.862233] __do_sys_finit_module+0xc4/0x140
[ 4.862234] ? __do_sys_finit_module+0xc4/0x140
[ 4.862236] __x64_sys_finit_module+0x18/0x30
[ 4.862237] do_syscall_64+0x5b/0x90
[ 4.862239] ? __x64_sys_mmap+0x33/0x70
[ 4.862240] ? do_syscall_64+0x67/0x90
[ 4.862241] ? ext4_llseek+0x60/0x120
[ 4.862244] ? ksys_lseek+0x92/0xe0
[ 4.862246] ? exit_to_user_mode_prepare+0x30/0xb0
[ 4.862248] ? syscall_exit_to_user_mode+0x26/0x50
[ 4.862250] ? __x64_sys_lseek+0x18/0x30
[ 4.862251] ? do_syscall_64+0x67/0x90
[ 4.862252] entry_SYSCALL_64_after_hwframe+0x63/0xcd
[ 4.862255] RIP: 0033:0x7f8128916c4d
[ 4.862256] Code: 5d c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c
24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 83 f1 0d 00 f7 d8 64 89 01 48
[ 4.862257] RSP: 002b:00007ffe4526c108 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
[ 4.862259] RAX: ffffffffffffffda RBX: 00005555fc6f17b0 RCX: 00007f8128916c4d
[ 4.862260] RDX: 0000000000000000 RSI: 00007f8128ac8458 RDI: 0000000000000019
[ 4.862261] RBP: 00007f8128ac8458 R08: 0000000000000000 R09: 00007ffe4526c230
[ 4.862262] R10: 0000000000000019 R11: 0000000000000246 R12: 0000000000020000
[ 4.862262] R13: 00005555fc60a580 R14: 0000000000000000 R15: 00005555fc6f4020
[ 4.862264] </TASK>
[ 4.862264] ---[ end trace 0000000000000000 ]---
[ 4.862269] snd_hda_intel 0000:00:08.1: failed to bind 0000:00:07.0 (ops i915_audio_component_bind_ops [i915]): -17
[ 4.862344] snd_hda_intel 0000:00:08.1: adev bind failed: -17
[ 4.862345] i915 0000:00:07.0: [drm] *ERROR* failed to add audio component (-17)

That simply means that the vbios, which would be in the ROM BAR (BAR 6), is missing.
From some other BIOS-related Intel bugs I suspect the i915 driver relies on it while the Windows driver doesn't.

Interesting. Is there a vbios that I can use? Until some time ago, I used a vbios extracted from the NVIDIA GPU and passed it to the VM using this kind of method:

-s 8:0,passthru,2/0/0,rom=TU102.rom \
-s 8:1,passthru,2/0/1 \
-s 8:2,passthru,2/0/2 \
-s 8:3,passthru,2/0/3 \

Maybe I can try something like this:

-s 7:0,passthru,0/2/0,rom=UHD-Graphics-630-vbios.rom

Is there a method to extract the vbios from my Intel GPU? Thanks very much; your help is always valuable.
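For what it's worth, on a machine where Linux can see the card, the usual way to dump a GPU ROM is through the PCI sysfs rom attribute. A sketch, run as root, with the caveat that an IGD whose VBT lives in system firmware may expose nothing here (which is exactly what the "BAR 6" error above hints at):

echo 1 > /sys/bus/pci/devices/0000:00:02.0/rom   # enable reading of the ROM BAR
cat /sys/bus/pci/devices/0000:00:02.0/rom > UHD-Graphics-630-vbios.rom
echo 0 > /sys/bus/pci/devices/0000:00:02.0/rom   # disable it again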