Hello everyone.
What I'm trying to do is set the framebuffer video adapter as the primary graphics card on my bhyve Ubuntu VM, instead of the NVIDIA RTX 2080 Ti that I have passed through. What I really want is to use both graphics adapters, with the framebuffer as primary and the NVIDIA card as secondary. I suspect that I need to use the integrated graphics adapter to apply NVIDIA's "PRIME render offload" configuration, which I think is what I actually need. The problem is that at the moment I can't use two monitors. So my goal is the same one explained on the official NVIDIA website:
https://download.nvidia.com/XFree86/Linux-x86_64/435.17/README/primerenderoffload.html
where we can read:
PRIME render offload is the ability to have an X screen rendered by one GPU, but choose certain applications within that X screen to be rendered on a different GPU.
This is particularly useful in combination with dynamic power management to leave an NVIDIA GPU powered off, except when it is needed to render select performance-sensitive applications.

and:

"To use NVIDIA's PRIME render offload support, configure the X server with an X screen using an integrated GPU with the xf86-video-modesetting X driver"
If I'm not mistaken, once I accomplish that I can use only one monitor, where I will see Linux running as a bhyve VM inside a window (smaller than the size of my screen), and at the same time I can use my RTX 2080 Ti for experimenting with Stable Diffusion without needing another monitor. Stable Diffusion needs a powerful graphics card to work.
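For reference, once the offload configuration works, the NVIDIA README linked above drives it with environment variables; a quick sanity check should look something like this (glxinfo comes from the mesa-utils package):

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep vendor

If offload is active, the OpenGL vendor string should report NVIDIA while the X screen itself stays on the other adapter.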
What have I tried so far? I applied the Xorg configuration explained on the NVIDIA website, but Xorg failed to start and reported some errors.
So, the controller below should be used as the primary one inside the Ubuntu VM:
-s 6,fbuf,tcp=0.0.0.0:5919,w=1600,h=950,wait \
while the ones below should be the secondary ones:
02:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)
02:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)
02:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)
02:00.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)
The script that I use to launch the VM is the following:
bhyve -S -c sockets=1,cores=2,threads=2 -m 4G -w -H -A \
-s 0,hostbridge \
-s 2,virtio-blk,/mnt/$vmdisk1'p2'/bhyve/img/Linux/ubuntu2210.img,bootindex=1 \
-s 3,virtio-blk,/dev/$vmdisk4 \
-s 4,virtio-blk,/dev/$vmdisk2 \
-s 6,fbuf,tcp=0.0.0.0:5919,w=1600,h=950,wait \
-s 8:0,passthru,2/0/0 \
-s 8:1,passthru,2/0/1 \
-s 8:2,passthru,2/0/2 \
-s 8:3,passthru,2/0/3 \
-s 10,virtio-net,tap19 \
-s 11,virtio-9p,sharename=/ \
-s 30,xhci,tablet \
-s 31,lpc \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI_CODE.fd \
vm0:19 < /dev/null & sleep 2 && vncviewer 0:19
In /boot/loader.conf I've added:
pptdevs="2/0/0 2/0/1 2/0/2 2/0/3"
Inside the Ubuntu guest OS, the graphics adapters show up like this (00:06.0 is the bhyve framebuffer device; the 00:08.x functions are the passed-through NVIDIA card):
00:06.0 VGA compatible controller: Device fb5d:40fb
00:08.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)
00:08.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)
00:08.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)
00:08.3 Serial bus controller: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)
As I said before, I tried the xorg.conf file suggested by the NVIDIA website to achieve the goal:
Section "ServerLayout"
Identifier "layout"
Screen 0 "iGPU"
EndSection
Section "Device"
Identifier "iGPU"
Driver "modesetting"
BusID "PCI:0:6:0
EndSection
Section "Screen"
Identifier "iGPU"
Device "iGPU"
EndSection
Section "ServerLayout"
Identifier "layout"
Option "AllowNVIDIAGPUScreens"
EndSection
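One thing I noticed while re-reading this: the file declares two ServerLayout sections with the same Identifier, while in the NVIDIA example the AllowNVIDIAGPUScreens option is meant to go in the one and only ServerLayout section. If I read that page correctly, the merged version I should be testing looks like this:

Section "ServerLayout"
    Identifier "layout"
    Screen 0 "iGPU"
    Option "AllowNVIDIAGPUScreens"
EndSection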
but it didn't work (the Xorg log with the reported errors is linked further below).
Anyway, there is something that works as expected, according to the NVIDIA website:
xrandr --listproviders
Providers: number : 1
Provider 0: id: 0x1b7 cap: 0x0 crtcs: 4 outputs: 8 associated providers: 0 name:NVIDIA-0
and:
nvidia-smi
Tue Dec 6 16:34:35 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.86.01 Driver Version: 515.86.01 CUDA Version: 11.7 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:00:08.0 Off | N/A |
| 29% 26C P8 20W / 250W | 1MiB / 11264MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
lsmod | grep nvidia-drm

returns nothing, while:

dmesg | grep nvidia-drm

shows:
[ 2.927164] [drm] [nvidia-drm] [GPU ID 0x00000008] Loading driver
[ 4.743168] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:00:08.0 on minor 0
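A side note I figured out while writing this: lsmod prints module names with underscores, so the module is probably loaded even though the grep above found nothing; lsmod | grep nvidia_drm is the form that matches. Also, as far as I understand, PRIME setups want nvidia-drm running with modesetting enabled, which should be something like:

echo "options nvidia-drm modeset=1" | sudo tee /etc/modprobe.d/nvidia-drm-modeset.conf
sudo update-initramfs -u

After a reboot, cat /sys/module/nvidia_drm/parameters/modeset should print Y.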
According to the logs, the X server does select the framebuffer device as the primary graphics adapter in the first place:
(--) PCI:*(0@0:6:0) ...
(--) PCI: (0@0:8:0) ...
This means that my problem is no longer related to the hypervisor configuration, but to how the Ubuntu guest OS is configured. So I'm aware that this question probably belongs on a support channel like the Ubuntu and/or NVIDIA forums, more NVIDIA than Ubuntu.
You can find the dmesg messages here: Ubuntu Pastebin
And here you can take a look at the Xorg log file: Ubuntu Pastebin
The NVIDIA website also says:
Also, confirm that the xf86-video-modesetting X driver is using “glamoregl”. The log file /var/log/Xorg.0.log should contain something like this…
but I don't see the word "glamoregl" anywhere in /var/log/Xorg.0.log.
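This is how I searched, case-insensitively in case the capitalization differs:

grep -i glamor /var/log/Xorg.0.log

and it returns no matches.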
If I use ONLY the framebuffer argument, it works: the desktop manager is loaded within the VM window. But if I declare both the framebuffer AND the NVIDIA slots among the bhyve parameters, the VM window shows only a blinking cursor with this error: nvidiafb: unknown NV_ARCH, and the physical monitor turns off.
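That message comes from nvidiafb, the old in-kernel framebuffer driver for NVIDIA cards, which is unrelated to the proprietary driver and does not recognize the TU102 chip. My guess (untested) is that blacklisting it inside the guest would remove at least that failure:

echo "blacklist nvidiafb" | sudo tee /etc/modprobe.d/blacklist-nvidiafb.conf
sudo update-initramfs -u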
Is the framebuffer used by bhyve (fbuf,tcp=0.0.0.0:5919,w=1600,h=950,wait) considered an "integrated GPU with the xf86-video-modesetting X driver"?