I'm failing to configure the framebuffer as the default graphics adapter and the NVIDIA GeForce RTX 2080 Ti as a PRIME render offload device.

When running Linux on bare metal, the vbios is exposed in /sys/bus/pci/devices/0000:00:02.0/rom.
I don't know whether BSD has the same facility.
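
If it helps, this is roughly how that ROM can be dumped on a bare-metal Linux boot (a sketch; run as root, and note that on some iGPUs the rom node may be empty):

# enable reads of the ROM BAR, dump it, then disable it again
cd /sys/bus/pci/devices/0000:00:02.0
echo 1 > rom                      # make the ROM readable
cat rom > /tmp/igpu_vbios.rom     # save a copy
echo 0 > rom                      # restore the default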

I found this thread:

which looks promising, offering two approaches to try:

  1. GitHub - patmagauran/i915ovmfPkg: VBIOS for Intel GPU Passthrough
  2. GitHub - google/pawn: Extract BIOS firmware from Intel-based workstations and laptops

Anyway, I'm not sure they will work. The first one could damage my GPU, and the second one doesn't seem to support my specific iGPU model…

What do you suggest I do, based on that thread?

I don't think that will work for Linux. A generic vbios doesn't have the device-specific VBT that the i915 driver is looking for. Worth a try, nevertheless.

What I still don't understand is why a lot of Intel iGPUs work with the patches developed for bhyve, but mine doesn't. Yes, I've asked the bhyve developer I'm in contact with, but he never gave a satisfactory answer. You can read what the developers said about this here:

https://reviews.freebsd.org/D26209?id=76277

You will see that they say it should work for most platforms. But what about mine?

They say to use a gop.rom:

-s 2,passthru,0/2/0[,rom=<path/to/gop.rom>] \

Where can I find it?

They also say:

Running a VM without GOP driver

Install your VM on the "old" way with GVT-d disabled

Does that mean I should disable GVT-d in my BIOS before doing anything else?

The “EFI GOP” is the EFI equivalent of the “VGA BIOS” in legacy BIOS boot; both are contained in the vbios. They're just awkwardly telling you to attach a vbios file.

Put Linux on a USB boot stick and extract it from the sysfs node I mentioned above.

I suspect this is for Windows only, doesn’t really make sense to me.

How should I know? This is the Linux NVIDIA forum, not the Intel-on-BSD/bhyve one; I have no knowledge of that. AFAIK, with Linux/KVM you can simply pass through the ROM BAR.
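
(For reference, on a KVM host that would look roughly like the sketch below; the disk image and ROM path are hypothetical, and this is QEMU, not bhyve.)

# vfio-pci passes the device's ROM BAR through by default;
# a dumped ROM image can also be supplied explicitly with romfile=
qemu-system-x86_64 -machine q35 -enable-kvm -m 4G \
    -device vfio-pci,host=0000:00:02.0,romfile=/path/to/igpu_vbios.rom \
    -drive file=guest.img,if=virtio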

Just to make sure, where are your bios messages displayed, on the intel gpu or the nvidia gpu?

What do you mean? When I try to pass through the Intel GPU, I start bhyve with this script:


bhyve -S -c sockets=1,cores=2,threads=2 -m 4G -w -H -A \
-s 0,hostbridge \
-s 2,virtio-blk,/mnt/$vmdisk1'p2'/bhyve/img/Linux/ubuntu2210.img,bootindex=1 \
-s 6,fbuf,tcp=0.0.0.0:5919,w=1600,h=950,wait \
-s 7,passthru,0/2/0 \
-s 10,virtio-net,tap19 \
-s 11,virtio-9p,sharename=/ \
-s 30,xhci,tablet \
-s 31,lpc \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI_CODE.fd \
vm0:19 < /dev/null & sleep 2 && vncviewer 0:19

As you can see, there isn't any NVIDIA graphics card, only the framebuffer and the Intel GPU passed through at address 0/2/0… In the BIOS I set the NVIDIA 1060 (1/0/0) as the primary; the Intel GPU is enabled, but the PC does not boot using it… and the RTX 2080 Ti has address 2/0/0. In /boot/loader.conf I have declared:

pptdevs="0/2/0 2/0/0 2/0/1 2/0/2 2/0/3"
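
For what it's worth, on the host you can check that these devices were actually claimed by the ppt driver before starting the VM, something like:

# each device listed in pptdevs should show up as ppt<N>, not vgapci/nvidia
pciconf -l | grep ppt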

According to this little piece of the log:


[    2.744008] i915 0000:00:08.0: BAR 6: can't assign [??? 0x00000000 flags 0x20000000] (bogus alignment)
[    2.744012] i915 0000:00:08.0: [drm] Failed to find VBIOS tables (VBT)
[    2.745788] i915 0000:00:08.0: [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4)
[    3.390409] i915 0000:00:08.0: [drm] failed to retrieve link info, disabling eDP
[    3.418329] process '/usr/bin/anydesk' started with executable stack
---> [    3.423175] [drm] Initialized i915 1.6.0 20201103 for 0000:00:08.0 on minor 0

doesn't it seem that the i915 driver is initialized despite the error?

Yes, it’s loading.

Your opinion seems to be confirmed by the bhyve developer, who said:

Intel GPU passthrough should work on upstream 14.0 and likely on 13.2 for Linux guest.
The i915 driver doesn’t require a VBIOS nor an OpRegion.
Windows requires an OpRegion. That's what's still missing upstream, and it's included in my patches.
Btw: adding a ROM bar to a PCI device is supported in upstream 13.2 and 14.0 too.
The issue you’re seeing isn’t related to some missing functionality.

Unfortunately he didn’t explain what could cause my issue.

So, what now? Do you have any suggestions for me?

IDK, what kind of monitor do you have connected to the onboard graphics? What does it display? Does drm_info show it?
I really don't know what you're trying to do anyway. Because you don't want to connect a monitor to the passed-through nvidia gpu, you're instead additionally passing through an intel gpu, connecting a monitor to that, and then using PRIME to make use of the nvidia gpu? Doesn't sound sane.
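
If drm_info isn't handy, the connector status can also be read straight from sysfs inside the guest, something like:

# print every DRM connector and whether a monitor is detected on it
for c in /sys/class/drm/card*-*; do echo "$c: $(cat $c/status)"; done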

Hello. Right now I'm trying different configurations to understand why it does not work. I have attached the HDMI cable from the Samsung SyncMaster monitor to the Intel GPU's HDMI port, because I want to see whether the signal can reach the monitor. For this experiment, let's forget about the NVIDIA GPUs and the framebuffer; I have not passed those devices through, to avoid any interference.

This is the bhyve script I'm trying:

bhyve -S -c sockets=1,cores=2,threads=2 -m 4G -w -H -A \
-s 0,hostbridge \
-s 1,nvme,/dev/nvd0,bootindex=1 \
-s 2,virtio-blk,/dev/$vmdisk4 \
-s 3,virtio-blk,/dev/$vmdisk8 \
-s 4,virtio-blk,/dev/$vmdisk11 \
-s 5,passthru,0/2/0 \
-s 10,virtio-net,tap2 \
-s 11,virtio-9p,sharename=/ \
-s 12,hda,play=/dev/dsp,rec=/dev/dsp \
-s 30,xhci,tablet \
-s 31,lpc \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_BHF_CODE.fd \
vm0:2 < /dev/null & sleep 2 && vncviewer 0:2

I tried two different xorg.conf files in the Ubuntu 22.10 guest VM:

Section "Device"
 Identifier  "Intel Graphics"
 BusID       "PCI:0:5:0"
 Driver      "i915"
EndSection
Section "Device"
 Identifier  "Intel Graphics"
 BusID       "PCI:0:5:0"
 Driver      "intel"
EndSection

These are the log files.

Xorg.0.i915.log: Ubuntu Pastebin
Xorg.0.intel.log: Ubuntu Pastebin

When I use the i915 driver, it says it can't find the module and Xorg exits.
When I use the intel driver, Xorg works, but my monitor does not turn on (it's an old Samsung SyncMaster).
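
(Presumably the first one fails because “i915” is the kernel driver's name, not an Xorg module; on the X side the usual choices are the built-in modesetting driver or xf86-video-intel. A sketch of the modesetting variant, using the guest BusID from the lspci output below:)

Section "Device"
 Identifier  "Intel Graphics"
 BusID       "PCI:0:5:0"
 Driver      "modesetting"
EndSection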

lspci on FreeBSD:

00:00.0 Host bridge: Intel Corporation 8th/9th Gen Core 8-core Desktop Processor Host Bridge/DRAM Registers [Coffee Lake S] (rev 0d)
00:01.0 PCI bridge: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) (rev 0d)
00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x8) (rev 0d)
00:02.0 Display controller: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630] (rev 02)
00:12.0 Signal processing controller: Intel Corporation Cannon Lake PCH Thermal Controller (rev 10)
00:14.0 USB controller: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller (rev 10)
00:14.2 RAM memory: Intel Corporation Cannon Lake PCH Shared SRAM (rev 10)
00:16.0 Communication controller: Intel Corporation Cannon Lake PCH HECI Controller (rev 10)
00:17.0 SATA controller: Intel Corporation Cannon Lake PCH SATA AHCI Controller (rev 10)
00:1b.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #17 (rev f0)
00:1c.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #1 (rev f0)
00:1d.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #9 (rev f0)
00:1f.0 ISA bridge: Intel Corporation Z390 Chipset LPC/eSPI Controller (rev 10)
00:1f.3 Audio device: Intel Corporation Cannon Lake PCH cAVS (rev 10)
00:1f.4 SMBus: Intel Corporation Cannon Lake PCH SMBus Controller (rev 10)
00:1f.5 Serial bus controller: Intel Corporation Cannon Lake PCH SPI Controller (rev 10)
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-V (rev 10)
01:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 3GB] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)
02:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)
02:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)
02:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)
02:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)
03:00.0 Non-Volatile memory controller: Micron/Crucial Technology Device 5403 (rev 03)

lspci on Ubuntu 22.10:

00:00.0 Host bridge: Network Appliance Corporation Device 1275
00:01.0 Non-Volatile memory controller: Device fb5d:0a0a
00:02.0 SCSI storage controller: Red Hat, Inc. Virtio block device
00:03.0 SCSI storage controller: Red Hat, Inc. Virtio block device
00:04.0 SCSI storage controller: Red Hat, Inc. Virtio block device
00:05.0 Display controller: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630] (rev 02)
00:0a.0 Ethernet controller: Red Hat, Inc. Virtio network device
00:0b.0 SCSI storage controller: Red Hat, Inc. Virtio filesystem
00:0c.0 Audio device: Intel Corporation NM10/ICH7 Family High Definition Audio Controller
00:1e.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller
00:1f.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]

On Ubuntu 22.10 I have disabled the automatic start of Xorg, to have maximum control, with the command:

sudo systemctl set-default multi-user.target

In the bhyve parameters I haven't added the framebuffer, only the Intel GPU, so I log into the Ubuntu VM over ssh and run “startx” from there…
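
(For reference, from the ssh session that is just something like the following; the log redirection is only there to make debugging easier.)

# start X manually on the passed-through GPU and keep its output
startx -- :0 > ~/startx.log 2>&1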

drm_info.log file: Ubuntu Pastebin

OK man, for the first time I've been able to pass my Intel GPU through to an Ubuntu 22.10 VM, and my monitor turned on. It hadn't worked before because the monitor is old; it didn't even turn on physically, because I think it needs to warm up a little, like every old monitor :P. Maybe there were also other problems preventing it from turning on. So, now that I'm sure there are no bugs on either the FreeBSD side or the Linux side, we can jump back to the original question: how do I configure Xorg so that the Intel GPU acts as the framebuffer and the NVIDIA GPU as PRIME render offload? I ask because we already ascertained that the configuration below does not work with the simple fbdev:

Section "ServerLayout"
  Identifier "layout"
  Screen 0 "iGPU"
EndSection

Section "Device"
  Identifier "iGPU"
  Driver "modesetting"
  BusID    "PCI:0:6:0"
EndSection

Section "Screen"
  Identifier "iGPU"
  Device "iGPU"
EndSection

Section "ServerLayout"
  Identifier "layout"
  Option "AllowNVIDIAGPUScreens"
EndSection

So I can't have the fbdev/VNC framebuffer as the primary and the NVIDIA GPU as secondary (using only one monitor). Some posts ago you told me that if I use the Intel GPU as the framebuffer instead of fbdev, with the NVIDIA as secondary, it would work. So, how can I achieve this? I want to do this because I can't use two monitors: my secondary monitor is old and I have placed it in another room. I have only one monitor, and on it I want to see the Ubuntu VM inside a window while using the NVIDIA GPU for rendering videos and pictures. At this point I guess that if I pass the Intel GPU through without attaching a monitor to its HDMI port, it will give the error I explained above. And maybe the NVIDIA GPU will behave the same way…
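
(For context, my understanding is that once PRIME render offload is working, individual programs are sent to the NVIDIA GPU with the offload environment variables while the desktop stays on the iGPU screen; a sketch of what I'd run in the guest:)

# render this program on the NVIDIA GPU, display it on the iGPU's X screen
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"
# same idea for a real workload (blender is just an example)
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia blender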

Like I said, I have no idea what you're trying to do or what kind of hardware you're using; I'm always just guessing.
So, from all the things you tried, I suspect you want to have

  • one monitor connected to the first nvidia gpu
  • bound to the host bsd
  • displaying the host’s graphical desktop
  • running a linux vm
  • with a passed-through second nvidia
  • with its output redirected to the host’s display

Answer: it doesn't work; the virtual graphics DRM driver is missing the PRIME functions.
You can only use VirtualGL in the VM, and only for GL. Maybe some Vulkan offloading will work, I don't know.

I thought I had already described my hardware when I posted the lspci output from FreeBSD… you didn't understand, OK, I can try to explain better:

I have two monitors right now. The first one is new, a 32-inch AOC, and I use it for FreeBSD. The second screen is 25 years old, a Samsung SyncMaster, and I don't want to use it. I have three GPUs: 1) an NVIDIA GeForce GTX 1060, used for FreeBSD and configured as the default in the BIOS (my PC boots using it); 2) the Intel GPU integrated on the motherboard; 3) an NVIDIA RTX 2080 Ti. I want to use only the AOC monitor for FreeBSD. When I boot any bhyve/Linux VM I don't want to use the Samsung monitor. I want to use the Linux desktop UI within a window via a framebuffer and, at the same time, render my projects using the RTX 2080 Ti in Linux via PRIME render offload, so without attaching any monitor to the VM. I've already installed the CUDA toolkit inside the Linux VM and it works well, BUT only if the RTX 2080 Ti is attached to a physical monitor. The configuration I'm trying to achieve is more comfortable for me than using two monitors. I need the framebuffer because without it I can't use Linux at all, since I can't see what happens when I move my mouse and keyboard; that's because I can only use one monitor. The Samsung is not working well; I think it's dying. So I want:

  1. to use one monitor connected to the first NVIDIA GPU (the GeForce 1060); this monitor should display what happens on the host and also on the guest OS (within the Linux desktop environment), and at the same time I would like to use the 2080 Ti for rendering purposes

  2. the 1060 bound to the host BSD: yes

  3. displaying the host's graphical desktop and the graphical desktop of the Linux/bhyve virtual machine at the same time, running the latter within a window smaller than the host screen resolution

  4. running a Linux VM: yes

  5. with a passed-through second NVIDIA GPU: yes, with the 2080 Ti used for rendering

  6. with its output redirected to the host's display: yes. I think this can be done with a framebuffer. I always use the framebuffer created with VNC, but if at the same time I pass the RTX 2080 Ti through I'm forced to use a secondary monitor, and I don't want to do that.

  7. I'm not sure whether the Intel GPU can act as a framebuffer instead of the fbdev driver used by VNC, or whether there is a way to use a framebuffer (fbdev or any other, I don't care) together with the 2080 Ti WITHOUT using a secondary monitor.

  8. Can VirtualGL let me display what happens inside the Linux VM while at the same time rendering my 3D projects with the 2080 Ti? What are the downsides of using VirtualGL? I'm not sure it will work under FreeBSD, because I suspect that at least the server part of VirtualGL would have to run under FreeBSD, is that right?

You can only use VirtualGL, or rather Bumblebee. It needs to be installed in the Linux VM. It will spawn a second X server on the NVIDIA GPU (which you won't see) and then copy the output to the X server running on the virtual graphics.
Downsides: limited to OpenGL, performance loss.
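
The workflow inside the Linux guest would look roughly like this (a sketch; it assumes VirtualGL is installed and the hidden X server on the NVIDIA GPU runs as display :1):

# run a program on the NVIDIA X server (:1) and copy its output to the visible X server
vglrun -d :1 glxinfo | grep "OpenGL renderer"   # should report the RTX 2080 Ti
vglrun -d :1 blender                            # blender is just an example workload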

Thanks, man. I don't like the performance loss, so I think I will go another route, the one already discussed: passing through both NVIDIA GPUs and keeping the Intel GPU as the default GPU for booting the PC and FreeBSD. Now I would like to be sure that the 1060 can act as the framebuffer, and to know which xorg config I should use. Maybe something like this; will it work?

Section "ServerLayout"
  Identifier "layout"
  Screen 0 "iGPU"
EndSection

Section "Device"
  Identifier "iGPU"
  Driver "modesetting"
  BusID    "PCI:0:6:0"
EndSection

Section "Screen"
  Identifier "iGPU"
  Device "iGPU"
EndSection

Section "ServerLayout"
  Identifier "layout"
  Option "AllowNVIDIAGPUScreens"
EndSection

where BusID “PCI:0:6:0” is definitely the 1060, just with a remapped BusID assigned by Linux inside the guest:

00:06.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 3GB] (rev a1)
00:06.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)

Or am I wrong?
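
(To double-check the BusID and whether X sees both GPUs as providers, I can run these inside the guest once X is up, just as a sanity check:)

# confirm the guest PCI addresses of the GPUs
lspci | grep -Ei 'vga|3d|display'
# list the render/offload providers known to the running X server
xrandr --listproviders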

I tried to pass through the framebuffer, the 1060 and the 2080 Ti with this bhyve configuration:

bhyve -S -c sockets=1,cores=2,threads=2 -m 4G -w -H -A \
-s 0,hostbridge \
-s 1,virtio-blk,/dev/$vmdisk11,bootindex=1 \
-s 2,virtio-blk,/dev/$vmdisk4 \
-s 4,fbuf,tcp=0.0.0.0:5904,w=1600,h=950 \
-s 5:0,passthru,1/0/0 \
-s 8:0,passthru,2/0/0 \
-s 8:1,passthru,2/0/1 \
-s 8:2,passthru,2/0/2 \
-s 8:3,passthru,2/0/3 \
-s 12,virtio-net,tap4 \
-s 13,virtio-9p,sharename=/ \
-s 14,hda,play=/dev/dsp,rec=/dev/dsp \
-s 30,xhci,tablet \
-s 31,lpc \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_BHF_CODE.fd \
vm0:4 < /dev/null & sleep 2 && vncviewer 0:4

I haven't used any xorg.conf file, and this is the error that I got:

Sorry, this won't work with just one monitor; the limiting factor will always be the missing PRIME functions of the virtual GPU's DRM driver.
I think it's more fruitful to check whether someone has patches for that in the works.