Problem with PRIME offloading

OK, so I have set everything up as per the Arch Wiki and the NVIDIA docs (http://us.download.nvidia.com/XFree86/Linux-x86_64/430.14/README/randr14.html), and if I execute:

glxinfo | grep vendor

I get

server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
OpenGL vendor string: Intel Open Source Technology Center

This is the expected response. However, if I try PRIME offloading using

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep vendor

I get

X Error of failed request:  BadValue (integer parameter out of range for operation)
  Major opcode of failed request:  152 (GLX)
  Minor opcode of failed request:  24 (X_GLXCreateNewContext)
  Value in failed request:  0x0
  Serial number of failed request:  39
  Current serial number in output stream:  40

Yet if I run lsmod, I can see that the NVIDIA drivers are indeed loaded:

nvidia_drm             49152  1
drm_kms_helper        212992  2 nvidia_drm,i915
drm                   516096  15 drm_kms_helper,nvidia_drm,i915
nvidia_uvm           1085440  0
nvidia_modeset       1114112  1 nvidia_drm
nvidia              19980288  2 nvidia_uvm,nvidia_modeset
ipmi_msghandler        69632  2 ipmi_devintf,nvidia

FWIW, I initially had my laptop set up to use only the built-in NVIDIA card, but it was draining my battery too quickly.

Also, the output from nvidia-smi is:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.36       Driver Version: 440.36       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1050    Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   49C    P0    N/A /  N/A |      0MiB /  4042MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

nvidia-bug-report.log.gz (368 KB)

Please run nvidia-bug-report.sh as root and attach the resulting .gz file to your post. Hovering the mouse over an existing post of yours will reveal a paperclip icon.
https://devtalk.nvidia.com/default/topic/1043347/announcements/attaching-files-to-forum-topics-posts/
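
For example, something like this (the script normally ends up in your PATH with the driver install and writes its archive into the current directory):

# run as root; produces nvidia-bug-report.log.gz in the current directory
sudo nvidia-bug-report.sh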

Done

Please remove:
/usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf
/etc/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf
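
For example, instead of deleting them outright you can rename them so they no longer end in “.conf”; the Xserver normally only parses files with a .conf suffix from those directories (the “.disabled” suffix is just an example):

sudo mv /usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf /usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf.disabled
sudo mv /etc/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf /etc/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf.disabled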
If this doesn’t resolve the issue, please create a new nvidia-bug-report.log.

I am afraid that didn’t work either.
nvidia-bug-report.log.gz (364 KB)

The X server is always adding the iGPU twice instead of the NVIDIA GPU. Please check if this xorg.conf works:

Section "ServerLayout"
  Identifier "layout"
  Screen 0 "iGPU"
  Option "AllowNVIDIAGPUScreens"
EndSection

Section "Device"
  Identifier "iGPU"
  Driver "modesetting"
  BusID "PCI:0:2:0"
EndSection

Section "Screen"
  Identifier "iGPU"
  Device "iGPU"
EndSection

Section "Device"
  Identifier "nvidia"
  Driver "nvidia"
  BusID "PCI:1:0:0"
EndSection

Xorg had a section-ordering bug at some point.
Also, please post the output of
ls -l /etc/X11

Edit: typo fixed.

I seem to be getting a different error now.

X Error of failed request:  BadValue (integer parameter out of range for operation)
  Major opcode of failed request:  156 (NV-GLX)
  Minor opcode of failed request:  6 ()
  Value in failed request:  0x0
  Serial number of failed request:  84
  Current serial number in output stream:  84

Output from ls -l /etc/X11:

total 20K
drwxr-xr-x 3 root root 4.0K Sep  3  2018 xinit
drwxr-xr-x 2 root root 4.0K Dec 11 13:38 xorg.conf.d
-rw-r--r-- 1 root root  375 Dec 11 16:12 xorg.conf
-rwxr-xr-x 1 root root  685 Dec 10 14:57 xorg.conf.back
-rwxr-xr-x 1 root root  756 Sep  3  2018 xorg.conf.nvidia

nvidia-bug-report.log.gz (382 KB)

OK, looks better; just the sub-module couldn’t be found:

[   155.969] (II) LoadModule: "glxserver_nvidia"
[   155.969] (WW) Warning, couldn't open module glxserver_nvidia
[   155.969] (EE) NVIDIA: Failed to load module "glxserver_nvidia" (module does not exist, 0)

Please add this to the xorg.conf:

Section "Files"
    ModulePath "/usr/lib/xorg/modules, /usr/lib/nvidia/xorg"
EndSection

Furthermore, please remove (or rename) the files
xorg.conf.back
xorg.conf.nvidia
Some X server versions have the bug that anything beginning with “xorg.conf” will be taken into account for configuration.
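
For example, move them somewhere the Xserver won’t look (the “disabled” directory name is just an example):

sudo mkdir -p /etc/X11/disabled
sudo mv /etc/X11/xorg.conf.back /etc/X11/xorg.conf.nvidia /etc/X11/disabled/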

This finally solved the problem, although I had to put the “Files” section right at the beginning of xorg.conf. So this is how the working file looks now:

Section "Files"
    ModulePath "/usr/lib/xorg/modules"
    ModulePath "/usr/lib/nvidia/xorg"
EndSection

Section "ServerLayout"
    Identifier "layout"
    Screen 0 "iGPU"
    Option "AllowNVIDIAGPUScreens"
EndSection

Section "Device"
    Identifier "iGPU"
    Driver "modesetting"
    BusID "PCI:0:2:0"
EndSection

Section "Screen"
    Identifier "iGPU"
    Device "iGPU"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1:0:0"
EndSection
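
For reference, with this config in place the offload command from the start of the thread now targets the dGPU; the vendor strings should come back from NVIDIA rather than Mesa/Intel, roughly like this (exact strings may vary by driver version):

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep vendor

server glx vendor string: NVIDIA Corporation
client glx vendor string: NVIDIA Corporation
OpenGL vendor string: NVIDIA Corporation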

Finally! Thank you for helping me out.
nvidia-bug-report.log.gz (379 KB)