2060m, ArchLinux, Xorg, Prime installed -> super low performance everywhere

Tried distros: Arch, Artix (arch derivative), Manjaro. KDE, all updates, fresh installs.

Prime works: I can even play Squad through Steam, but in any game, or in Unreal Engine 5 / Godot, I get very low FPS. Squad shows 30-45 FPS on low settings at Full HD. MangoHud is uninstalled (MANGOHUD_DLSYM=0 MANGOHUD=0). No gamemode or other boosters, I don't use them. Drivers are from the official pacman repos, no -git versions.

Fresh install, without xf86-video-intel or the nvidia driver, and this:
vblank_mode=0 __GL_SYNC_TO_VBLANK=0 godot (yes, on Intel) shows 3-4k FPS on a completely empty default project (vsync off, of course), GPU at 99%, temp 75. After installing xf86-video-intel, the nvidia driver, and an xorg.conf (or without it), it shows 300-900 FPS. NVIDIA PRIME offload shows much lower FPS still.
So I just can’t work. Please help me!

I don't know what information will be helpful, so I'm just guessing here. And I can't find a spoiler function, so sorry for the long copy-paste:

Compositor: a clean Xorg session gives the same test results; KDE compositor on/off doesn't matter.

grep nvi /etc/mkinitcpio.conf
MODULES=(lz4 nvidia nvidia_modeset nvidia_uvm nvidia_drm)

lsmod | grep nv
nvme_core 155648 2 nvme
nvidia_drm 73728 4
nvidia_uvm 2609152 0
nvidia_modeset 1163264 2 nvidia_drm
nvidia 39153664 97 nvidia_uvm,nvidia_modeset

/etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 quiet nvidia-drm.modeset=1" (doesn't matter at all)

glxinfo | grep -vE "(0x|GLX_|GL_)"
ATTENTION: default value of option vblank_mode overridden by environment.
ATTENTION: default value of option mesa_glthread overridden by environment.
name of display: :0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.4
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
GLX version: 1.4
Version: 22.0.1
Accelerated: yes
Video memory: 3072MB
Unified memory: yes
Max core profile version: 4.6
Max compat profile version: 4.6
Max GLES1 profile version: 1.1
Max GLES[23] profile version: 3.2
OpenGL vendor string: Intel
OpenGL renderer string: Mesa Intel(R) Xe Graphics (TGL GT2)
OpenGL core profile version string: 4.6 (Core Profile) Mesa 22.0.1
OpenGL core profile shading language version string: 4.60
OpenGL core profile context flags: no-error
OpenGL core profile profile mask: core profile

OpenGL version string: 4.6 (Compatibility Profile) Mesa 22.0.1
OpenGL shading language version string: 4.60
OpenGL context flags: no-error
OpenGL profile mask: compatibility profile

OpenGL ES profile version string: OpenGL ES 3.2 Mesa 22.0.1
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20

xrandr --listproviders
Providers: number : 2
Provider 0: id: 0x48 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 4 outputs: 6 associated providers: 0 name:modesetting
Provider 1: id: 0x272 cap: 0x0 crtcs: 0 outputs: 0 associated providers: 0 name:NVIDIA-G0

./xrun.sh glxinfo | grep -vE "(0x|GLX_|GL_)"
name of display: :0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
client glx extensions:
GLX version: 1.4
Dedicated video memory: 6144 MB
Total available memory: 6144 MB
Currently available dedicated video memory: 5927 MB
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: NVIDIA GeForce RTX 2060 with Max-Q Design/PCIe/SSE2
OpenGL core profile version string: 4.6.0 NVIDIA 510.60.02
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile

OpenGL version string: 4.6.0 NVIDIA 510.60.02
OpenGL shading language version string: 4.60 NVIDIA
OpenGL context flags: (none)
OpenGL profile mask: (none)

OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 510.60.02
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20

cat /etc/X11/xorg.conf.bak (doesn’t matter at all)
Section "ServerLayout"
Identifier "layout"
Screen 0 "intel"
Inactive "nvidia"
Option "AllowNVIDIAGPUScreens"
EndSection

Section "Device"
Identifier "nvidia"
Driver "nvidia"
BusID "PCI:2e:0:0"
Option "AddARGBGLXVisuals" "true"
EndSection

Section "Screen"
Identifier "nvidia"
Device "nvidia"
EndSection

Section "Device"
Identifier "intel"
Driver "intel"
BusID "PCI:0:2:0"
EndSection

Section "Screen"
Identifier "intel"
Device "intel"
EndSection
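One thing worth double-checking in the config above: `lspci` reports bus numbers in hexadecimal (e.g. `2e:00.0`), while Xorg's `BusID` is conventionally written in decimal, so a hex bus of `2e` should become `46`. If the nvidia Device section were being silently ignored, this conversion would be a likely culprit. A quick sketch (the `2e` value here is just taken from the config above):

```shell
# Convert a hex lspci address (bus 2e, device 00, function 0)
# into the decimal form Xorg's BusID expects.
bus=2e; dev=00; fn=0
printf 'PCI:%d:%d:%d\n' "0x$bus" "0x$dev" "$fn"
# -> PCI:46:0:0
```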

xrun.sh
export __VK_LAYER_NV_optimus="NVIDIA_only"
export VK_ICD_FILENAMES="/usr/share/vulkan/icd.d/nvidia_icd.json"
export __NV_PRIME_RENDER_OFFLOAD=1
export __NV_PRIME_RENDER_OFFLOAD_PROVIDER="NVIDIA-G0"
export __GLX_VENDOR_LIBRARY_NAME="nvidia"
export __GL_SYNC_TO_VBLANK=0
export vblank_mode=0

exec "$@"

./xrun.sh glxgears: 4600-4700 fps, nvidia-smi: N/A 59C P0 37W / N/A

Please run nvidia-bug-report.sh as root with the gpu under load and attach the resulting nvidia-bug-report.log.gz file to your post.

sudo nvidia-bug-report.sh while playing Squad on low settings. I've found the spoiler function, but it can't hold all the text.

That's quite odd: Squad uses up all VRAM, so the driver falls back to system memory and the GPU is doing heavy work transferring data over the PCIe bus. Squad is from 2015 and you're using 'low' graphics settings, so I wouldn't expect it to be this taxing. Something is going awry, but I don't know what.
Does this also happen when you set the nvidia GPU as primary, to check whether this depends on render offload?

Can it be done with the xorg.conf? I don’t know how.

If you want to set the nvidia gpu as primary, create
/etc/X11/xorg.conf.d/10-nvidia-primary.conf

Section "OutputClass"
    Identifier "nvidia"
    MatchDriver "nvidia-drm"
    Driver "nvidia"
    Option "PrimaryGpu" "yes"
EndSection

then do this:
https://wiki.archlinux.org/title/NVIDIA_Optimus#Display_managers
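In case it helps, the wiki step above boils down to adding two xrandr calls to the display manager's startup script; for sddm that is typically `/usr/share/sddm/scripts/Xsetup`. A sketch, with provider names assumed to match `xrandr --listproviders` output (they may differ on your system, e.g. `NVIDIA-0` vs `NVIDIA-G0`):

```shell
#!/bin/sh
# /usr/share/sddm/scripts/Xsetup -- run by sddm before the greeter starts.
# Make the integrated (modesetting) provider display what the NVIDIA GPU
# renders. Provider names come from `xrandr --listproviders`; adjust if needed.
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
```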

The file wasn’t created, nvidia gpu is still in offload mode.

Starting Xorg with

startx /bin/startplasma-x11

To avoid interfering with sddm.

Changed xrun.sh to use export __NV_PRIME_RENDER_OFFLOAD=0.
Actually, with the 0 value the situation is much worse: 2-5 FPS.

If it matters, I ran it without xorg.conf, but with 10-nvidia-primary.conf.
I just don't know how to change the setup so that 10-nvidia-primary.conf is actually used.
nvidia-bug-report.log.gz

I don't know where you put that file, but it's not being used. Please note the correct path:

/etc/X11/xorg.conf.d/10-nvidia-primary.conf

-rw------- 1 root root 118 2022-04-22 08:16 /etc/X11/xorg.conf.d/10-nvidia-primary.conf

Changed to chmod 644 and it has been read.
Can't upload the file, it gets stuck on "processing upload".

Please change permissions to 644.

Updated the last comment.

Not picked up, directory permissions?

ls -l /etc/X11/
total 20
drwxr-xr-x 2 root root 4096 2022-04-21 11:14 tigervnc/
drwxr-xr-x 3 root root 4096 2022-04-21 16:03 xinit/
drwxr-xr-x 2 root root 4096 2022-04-22 08:16 xorg.conf.d/
-rw-r--r-- 1 root root 695 2022-04-21 17:32 xorg.conf.bak
-rw-r--r-- 1 root root 52 2022-04-21 17:32 Xwrapper.config

And there is only 10-nvidia-primary.conf inside it

I don’t know what’s wrong with your system, the file is neither seen by the Xserver nor by nvidia-bug-report.log.

I'm sorry, I forgot about some env variables.
Arma 3 works very well on Ultra settings! And most importantly, the game engines work very well!

BUT how do I revert back to Intel as the main card while keeping full nvidia performance?
Because this breaks the whole point of NVIDIA Optimus for me.

Are you alive there? You did nothing… and just disappeared.

This is a user-to-user forum, so you should understand I have my own matters to take care of. Besides, it's the weekend. A bit more decency might suit you well.
How to revert? Simply remove the file; that should be obvious.
The problem in offload mode is that insane amounts of data are transferred to the nvidia GPU:

Tx Throughput : 791000 KB/s
Rx Throughput : 3051000 KB/s

What makes things worse is that your nvidia GPU only has an x4 PCIe connection.
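For reference, Tx/Rx figures like these appear in the PCI section of the full `nvidia-smi` query output; you can watch them live while the game runs (a sketch, assuming a driver recent enough to expose the throughput counters):

```shell
# Poll the PCIe throughput counters every second while the game
# is running in another window.
watch -n1 'nvidia-smi -q | grep -E "Tx Throughput|Rx Throughput"'
```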

In nvidia-primary mode, transfers are still high:

Tx Throughput : 343000 KB/s
Rx Throughput : 2087000 KB/s

but seemingly low enough not to cause slowdowns.
The cause of this is unknown to me; you might try a 470-series driver to check whether this is a regression.

Try using envycontrol; just remove all the changes you've made first:

yay -S envycontrol
sudo envycontrol -s nvidia
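A short usage sketch to go with that (assuming envycontrol's documented flags; the switch only takes effect after a reboot):

```shell
# Check which graphics mode is currently active, then switch the
# dGPU to primary. envycontrol rewrites the Xorg/udev config for you.
envycontrol --query          # prints: integrated / hybrid / nvidia
sudo envycontrol -s nvidia   # set the NVIDIA GPU as primary
sudo reboot                  # change applies on next boot
```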