Llvmpipe (LLVM 10.0.0, 256 bits) is getting detected instead of NVIDIA

I have NVIDIA driver 515.48.07 installed, but llvmpipe (LLVM 10.0.0, 256 bits) is getting detected instead (Red Hat 7.6).
nvidia-bug-report.log.gz (363.9 KB)

I am using kernel version 3.10.0-1160.71.1.el7.x86_64 (Red Hat 7).

[root@skkugeoth1 xorg.conf.d]# nvidia-smi
Wed Aug  3 13:37:02 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.48.07    Driver Version: 515.48.07    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:08:00.0 Off |                    0 |
| N/A   48C    P8    16W /  70W |      2MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

[root@skkugeoth1 xorg.conf.d]# glxinfo | grep -i opengl
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: llvmpipe (LLVM 7.0, 256 bits)
OpenGL version string: 2.1 Mesa 18.3.4
OpenGL shading language version string: 1.20
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 2.0 Mesa 18.3.4
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 1.0.16
OpenGL ES profile extensions:

[root@skkugeoth1 xorg.conf.d]# lshw -C display
description: 3D controller
product: TU104GL [Tesla T4]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:08:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm bus_master cap_list
configuration: driver=nvidia latency=0
resources: iomemory:39f0-39ef iomemory:39f0-39ef irq:16 memory:93000000-93ffffff memory:39fc0000000-39fcfffffff memory:39ff0000000-39ff1ffffff memory:94000000-943fffff memory:39ec0000000-39fbfffffff memory:39fd0000000-39fefffffff
description: VGA compatible controller
product: MGA G200EH
vendor: Matrox Electronics Systems Ltd.
physical id: 0.1
bus info: pci@0000:01:00.1
version: 01
width: 32 bits
clock: 33MHz
capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
configuration: driver=mgag200 latency=0
resources: irq:19 memory:91000000-91ffffff memory:92a88000-92a8bfff memory:92000000-927fffff

The bug report is attached.

I have a similar problem, but I cannot even detect the NVIDIA card.

When I use ubuntu-drivers devices:

== /sys/devices/pci0000:00/0000:00:04.0 ==
modalias : pci:v000080EEd0000CAFEsv00000000sd00000000bc08sc80i00
vendor : InnoTek Systemberatung GmbH
model : VirtualBox Guest Service
manual_install: True
driver : virtualbox-guest-dkms-hwe - distro non-free
driver : virtualbox-guest-dkms - distro non-free

== /sys/devices/pci0000:00/0000:00:02.0 ==
modalias : pci:v000015ADd00000405sv000015ADsd00000405bc03sc00i00
vendor : VMware
model : SVGA II Adapter
manual_install: True
driver : open-vm-tools-desktop - distro free

I disabled Secure Boot, but it still does not work.
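To double-check the Secure Boot state from inside Linux, something like the following should work (a sketch; mokutil may not be installed on every distro, so it falls back to checking for EFI variables):

```shell
# Report Secure Boot state: prefer mokutil, fall back to an EFI check.
if command -v mokutil >/dev/null 2>&1; then
    mokutil --sb-state 2>&1
elif [ -d /sys/firmware/efi ]; then
    echo "EFI system, but mokutil not installed"
else
    echo "Legacy BIOS boot - Secure Boot not applicable"
fi
```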

In Settings → About:

When I use nvidia-smi:
NVIDIA-SMI has failed because it couldn’t communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.

When I use lshw -C display:
WARNING: you should run this program as super-user.
description: VGA compatible controller
product: SVGA II Adapter
vendor: VMware
physical id: 2
bus info: pci@0000:00:02.0
version: 00
width: 32 bits
clock: 33MHz
capabilities: vga_controller bus_master rom
configuration: driver=vmwgfx latency=64
resources: irq:18 ioport:d010(size=16) memory:e0000000-e3ffffff memory:f0000000-f01fffff memory:c0000-dffff
WARNING: output may be incomplete or inaccurate, you should run this program as super-user.

I am not sure how to generate the bug report.
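The NVIDIA driver ships a script for this; run it as root and it writes nvidia-bug-report.log.gz into the current directory. A minimal sketch (the check guards against the script being absent when the driver failed to install):

```shell
# Generate the NVIDIA bug report; the script is installed with the driver.
if command -v nvidia-bug-report.sh >/dev/null 2>&1; then
    sudo nvidia-bug-report.sh
    ls -lh nvidia-bug-report.log.gz
else
    echo "nvidia-bug-report.sh not found - is the NVIDIA driver installed?"
fi
```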

I also tried:
sudo prime-select on-demand
Error: no integrated GPU detected.

I copied and pasted the bug report from Ubuntu into a .txt file; I am not sure whether it can be used or not.

nvidia-bug-report.log (1.3 MB)


The desktop versions of VMware/VirtualBox/Hyper-V don’t support PCI passthrough.

Hi generix,
are you answering me or him?
So I need to use another version of VirtualBox, is that right?

Hi, generix. Please give csyoo (above) some direction too. Thanks!

@csyoo the X server is running on the on-board Matrox graphics; please connect the monitor to the NVIDIA GPU and create an xorg.conf to use it.

Section "Device"
    Identifier     "nvidia"
    Driver         "nvidia"
    BusID          "PCI:8:0:0"
    Option         "AllowEmptyInitialConfiguration"
EndSection

You would have to use a completely different base system installed on the bare-metal hardware, e.g. Windows Server Hyper-V, Linux KVM, or VMware ESXi.
Running a plain desktop Windows system doesn’t make this possible.

@generix, two questions: 1) Does the Tesla T4 have a monitor connection (RGB)? 2) How do I create an xorg.conf after connecting the monitor to my NVIDIA T4? Thanks!

The T4 doesn’t have any monitor outputs. Sorry, I didn’t check which model you were using. It will use a virtual monitor (NVIDIA VGX) when the X server is configured to use it. It can then be accessed over x11vnc/x0vncserver or the like.
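For example, attaching x11vnc to the running X display would look something like this (a sketch; the display number :0 is an assumption about your setup, and x11vnc has to be installed first):

```shell
# Export the existing X display over VNC so the headless T4 can be viewed.
if command -v x11vnc >/dev/null 2>&1; then
    x11vnc -display :0 -auth guess -forever -shared
else
    echo "x11vnc not installed"
fi
```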
What kind of setup are you trying to run?

I need to change the NVIDIA setting to “Performance Mode” so that the system detects the NVIDIA GPU instead of llvmpipe. In nvidia-settings, the “PRIME Profiles” tab, from which I could select which GPU to use, is not shown. Simply put, my software has a rendering problem because llvmpipe is being detected instead of the NVIDIA GPU. Thanks!

This doesn’t work with a Matrox; its driver is very simple and doesn’t support any PRIME functions.

What would be the solution, then? Should I replace the on-board graphics card my monitor is connected to? Please list possible solutions to handle the problem. Thanks!

Hi, generix
I tried using VMware, but the same problem happened.
Meanwhile, someone told me that only a Linux host supports PCI passthrough; a Windows host does not.

@csyoo depending on your use case, try using Bumblebee or VirtualGL. Or use a different graphics card.

@ruyik VMware ESXi, aka vSphere. Like I said, a consumer Windows can’t do passthrough, only Windows Server versions with Hyper-V.

Thanks, generix. Can you instruct me on how to install VirtualGL?

Usually, it’s just install and go.
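Once the VirtualGL package is installed, you wrap the application with vglrun so its OpenGL rendering goes to the NVIDIA GPU; a quick check would look something like this (a sketch; glxinfo must also be installed):

```shell
# Verify that VirtualGL routes OpenGL to the NVIDIA GPU instead of llvmpipe.
if command -v vglrun >/dev/null 2>&1; then
    vglrun glxinfo | grep -i "opengl renderer"
else
    echo "vglrun not found - install the VirtualGL package first"
fi
```

If it works, the renderer string should name the NVIDIA GPU rather than llvmpipe.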