Intel for display + nVidia for CUDA - Optimus bug?

I’m running Linux Mint 18 with KDE5 on a desktop PC.

lspci | grep -i vga
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)
01:00.0 VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1)

I was trying to get the Intel GPU to handle the display while using the dedicated NVIDIA card for CUDA only (Blender Cycles rendering).
I’ve set the Intel GPU as the primary display in the BIOS and plugged my monitors into the motherboard.
I’ve modified my X.org config to use the “intel” device instead of “nvidia”, leaving the “nvidia” device unused by X.

Here’s my xorg.conf file:

Section "ServerLayout"
    Identifier "layout"
    
    Screen 0 "intel"
    Inactive "nvidia"
    
    #Screen 0 "nvidia"
    #Inactive "intel"
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"
    BusID "PCI:0@0:2:0"
    Option "AccelMethod" "None"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1@0:0:0"
    Option "ConstrainCursor" "off"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Option "AllowEmptyInitialConfiguration" "on"
    Option "IgnoreDisplayDevices" "CRT"
EndSection

I swapped “intel” and “nvidia” in the original pair of lines:

    Screen 0 "intel"
    Inactive "nvidia"

I kind of got it to work, but a few things make it really weird:

When X.org starts, the Plasma Desktop warns that the GPU doesn’t support OpenGL. Blender won’t run: no GLX extension found. The NVIDIA Optimus panel says I’m using the NVIDIA GPU, not Intel.

The X.org log seems to confirm that X is using the Intel GPU:

[  1138.958] (II) intel(0): resizing framebuffer to 3840x1080
[  1138.987] (II) intel(0): switch to mode 1920x1080@60.0 on HDMI3 using pipe 1, position (1920, 0), rotation normal, reflection none

If I switch to the Intel GPU from the NVIDIA Optimus panel, the Plasma Desktop and Blender work (immediately! no X.org restart needed!), but my thermal widget doesn’t report the temperature for the NVIDIA GPU anymore.

I tested CPU vs. GPU rendering in Blender, and the GPU renders 3-4 times faster than the CPU, so I guess CUDA really is working.

If I switch Optimus back to the NVIDIA GPU, I can’t run Blender anymore (without logging out! it’s the same X.org session!), but the thermal widget displays the GPU temperature again!

If I’ve just finished rendering with Blender using CUDA, I can see the GPU temperature falling from 60°C, so the GPU was used, even though nvidia-settings thinks it’s disabled (it won’t show me the PowerMizer panel or Thermal Control).

Also, why is the GLX extension not reported to processes when the NVIDIA card is selected in Optimus? And why is it magically present again when I switch to Intel without restarting the X.org server?

I’d expect to be using the NVIDIA GPU for CUDA while the GLX extension is reported for the Intel-driven display device in X.org. Maybe there should be a “dual” mode, with both GPUs enabled? I know that’s not the primary goal of Optimus, but this is what I’m doing, and nvidia-settings is behaving strangely: it acts as if the NVIDIA GPU were non-existent while I’m actually using it for CUDA.

I’ve rebooted, and the situation has changed.

Optimus has the Intel GPU enabled, and now Blender can’t see the NVIDIA GPU at all.

After switching to NVIDIA and restarting X.org, the dedicated GPU is detected, but it’s also being used for the display.

I checked my xorg.conf file, and it looks like it was backed up and replaced with the same config as before: disabling Intel and using NVIDIA. How can I keep this behaving predictably between reboots?

I want Intel for the display and NVIDIA for CUDA, while being able to view the NVIDIA GPU’s load and temperature.

This sounds like a Mint-specific configuration problem. I’d suggest seeking support in their user forums.

For the configuration you want, you should install the NVIDIA kernel drivers and CUDA, but not the NVIDIA OpenGL libraries.
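
With the .run installer from NVIDIA’s site, that typically means passing --no-opengl-files (the file name below is a placeholder for whatever driver version you download):

sudo sh ./NVIDIA-Linux-x86_64-<version>.run --no-opengl-files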

This is the list of packages with “nvidia” in the name that I have installed.
What should I remove?

$ dpkg -l | grep -i nvidia
ii  bbswitch-dkms                                   0.8-3ubuntu1                               amd64        Interface for toggling the power on NVIDIA Optimus video cards
ii  libcuda1-361                                    361.42-0ubuntu2                            amd64        NVIDIA CUDA runtime library
ii  nvidia-361                                      361.42-0ubuntu2                            amd64        NVIDIA binary driver - version 361.42
ii  nvidia-opencl-icd-361                           361.42-0ubuntu2                            amd64        NVIDIA OpenCL ICD
ii  nvidia-prime                                    0.8.2linuxmint1                            amd64        Tools to enable NVIDIA's Prime
ii  nvidia-prime-applet                             1.0.5                                      all          An applet for NVIDIA Prime
ii  nvidia-settings                                 361.42-0ubuntu1                            amd64        Tool for configuring the NVIDIA graphics driver

I installed nvidia-modprobe, and Blender detects the CUDA device now.
Still no thermal info for the NVIDIA GPU, though.
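
For anyone checking the same thing, a quick way to verify that the kernel module is loaded and the device nodes exist (standard paths, nothing distribution-specific assumed):

lsmod | grep nvidia
ls -l /dev/nvidia*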

I’ve searched around, and it looks like the nvidia-smi output is used for the thermal info; it returns an error, however, when the Intel GPU is in use:

# nvidia-smi
NVIDIA-SMI couldn't find libnvidia-ml.so library in your system. Please make sure that the NVIDIA Display Driver is properly installed and present in your system.
Please also try adding directory that contains libnvidia-ml.so to your system PATH.

My system log shows this:

10.10.2016 11:56	citron-grafik	com.ubuntu.ScreenResolution.Mechanism[2852]	update-alternatives: using /usr/lib/nvidia-361-prime/ld.so.conf to provide /etc/ld.so.conf.d/x86_64-linux-gnu_GL.conf (x86_64-linux-gnu_gl_conf) in manual mode
10.10.2016 11:56	citron-grafik	com.ubuntu.ScreenResolution.Mechanism[2852]	update-alternatives: using /usr/lib/nvidia-361-prime/ld.so.conf to provide /etc/ld.so.conf.d/x86_64-linux-gnu_EGL.conf (x86_64-linux-gnu_egl_conf) in manual mode
10.10.2016 11:56	citron-grafik	com.ubuntu.ScreenResolution.Mechanism[2852]	update-alternatives: using /usr/lib/nvidia-361-prime/alt_ld.so.conf to provide /etc/ld.so.conf.d/i386-linux-gnu_GL.conf (i386-linux-gnu_gl_conf) in manual mode
10.10.2016 11:56	citron-grafik	com.ubuntu.ScreenResolution.Mechanism[2852]	update-alternatives: using /usr/lib/nvidia-361-prime/alt_ld.so.conf to provide /etc/ld.so.conf.d/i386-linux-gnu_EGL.conf (i386-linux-gnu_egl_conf) in manual mode
10.10.2016 11:56	citron-grafik	com.ubuntu.ScreenResolution.Mechanism[2852]	Info: the current GL alternatives in use are: ['nvidia-361', 'nvidia-361']
10.10.2016 11:56	citron-grafik	com.ubuntu.ScreenResolution.Mechanism[2852]	Info: the current EGL alternatives in use are: ['nvidia-361', 'nvidia-361']
10.10.2016 11:56	citron-grafik	com.ubuntu.ScreenResolution.Mechanism[2852]	Info: selecting nvidia-361-prime for the intel profile

However, the library that nvidia-smi couldn’t find is still present on my system:

# locate libnvidia-ml.so
/usr/lib/nvidia-361/libnvidia-ml.so
/usr/lib/nvidia-361/libnvidia-ml.so.1
/usr/lib/nvidia-361/libnvidia-ml.so.361.42
/usr/lib32/nvidia-361/libnvidia-ml.so
/usr/lib32/nvidia-361/libnvidia-ml.so.1
/usr/lib32/nvidia-361/libnvidia-ml.so.361.42

I was able to work around this using:

# LD_PRELOAD=/usr/lib/nvidia-361/libnvidia-ml.so nvidia-smi

/usr/bin/nvidia-smi is, however, a symlink to

/etc/alternatives/x86_64-linux-gnu_nvidia_smi

So I can replace that link with a script that’ll run the real binary with the LD_PRELOAD trick:

#!/bin/bash
LD_PRELOAD=/usr/lib/nvidia-361/libnvidia-ml.so /etc/alternatives/x86_64-linux-gnu_nvidia_smi "$@"

At first I didn’t add the “$@” at the end, but the thermal applet returned an error; obviously it passes some arguments to nvidia-smi that were not being passed along.

So this is a workaround. I guess it could be made the default behaviour, saving others some headache.

TL;DR

If you want to use the Intel GPU for display and the NVIDIA GPU for CUDA with Blender, this is the workaround I figured out:

  1. Install nvidia-modprobe (you’ll need to reboot for this to take effect) so Blender can detect the NVIDIA CUDA device even when the display driver is not in use (am I right?).
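     On Mint/Ubuntu this should just be: sudo apt-get install nvidia-modprobe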

  2. Replace the /usr/bin/nvidia-smi symlink with a wrapper script. First remove the symlink:

sudo rm /usr/bin/nvidia-smi

Then create /usr/bin/nvidia-smi with this content:

#!/bin/bash
LD_PRELOAD=/usr/lib/nvidia-361/libnvidia-ml.so /etc/alternatives/x86_64-linux-gnu_nvidia_smi "$@"

Remember to give it the right permissions:

sudo chmod +x /usr/bin/nvidia-smi

Now the thermal info for the NVIDIA GPU should be available even when Optimus is using the Intel GPU for display, and Blender should see the NVIDIA CUDA device.

@unfa, thanks. The workaround works.
So, do you have any idea how to set the fan speed of the NVIDIA card when the iGPU is the primary display?

By the way, nvidia-modprobe was not necessary on Kubuntu 16.04 with the official Ubuntu 361 NVIDIA driver from the repository.

That’s a good question.

The card seems to control the fan speed itself, but I didn’t see it go faster than 48% even at a reported 78°C, which scared me a bit. It did, however, slow the fan down to 0% when the temperature fell to 39°C.

Hi all,
I followed this post, but in the end the result is that I can monitor the card, yet none of the applications that need the NVIDIA card work. My machine is an Intel 6700K with a GTX 1060, and I want to use the Intel GPU for normal PC use while mining with my NVIDIA card. My OS is Ubuntu 16.04 with the stock 4.4 kernel. Here is the output with the EWBF miner:

./miner: error while loading shared libraries: libnvidia-ml.so.1: cannot open shared object file: No such file or directory

I suppose the problem is that the nvidia-modprobe provided by the Ubuntu repositories isn’t aligned with the other NVIDIA packages… here is my situation:

dpkg -l | grep nvidia
rc nvidia-367 367.57-0ubuntu0.16.04.1+gpu16.04.1 amd64 NVIDIA binary driver - version 367.57
rc nvidia-375 375.82-0ubuntu0~gpu16.04.1 amd64 NVIDIA binary driver - version 375.82
ii nvidia-384 384.59-0ubuntu0~gpu16.04.1 amd64 NVIDIA binary driver - version 384.59
ii nvidia-modprobe 361.28-1 amd64 utility to load NVIDIA kernel modules and create device nodes
rc nvidia-opencl-icd-367 367.57-0ubuntu0.16.04.1+gpu16.04.1 amd64 NVIDIA OpenCL ICD
rc nvidia-opencl-icd-375 375.39-0ubuntu0.16.04.1 amd64 NVIDIA OpenCL ICD
ii nvidia-opencl-icd-384 384.59-0ubuntu0~gpu16.04.1 amd64 NVIDIA OpenCL ICD
ii nvidia-prime 0.8.2 amd64 Tools to enable NVIDIA’s Prime
ii nvidia-settings 384.59-0ubuntu0~gpu16.04.1 amd64 Tool for configuring the NVIDIA graphics driver

Consider that in the past I did some driver upgrades, which is why I still have 367 and 375 installed. By the way, nvidia-modprobe doesn’t match any driver I have available… any tips?
I just need a simple way to mine with the 1060.

I’m in a similar situation right now.

I can’t use the NVIDIA GPU for display. If I run:

sudo prime-select nvidia
blender

I get this:

/home/sources/blender-release/intern/ghost/intern/GHOST_WindowX11.cpp:303: X11 glXChooseVisual() failed, verify working openGL system!
initial window could not find the GLX extension

I can use CUDA, but only for command-line Blender rendering, when the NVIDIA GPU is selected in PRIME.

Before running any Blender process intended for GUI work, however, I need to run

prime-select intel

or it won’t start.

My installed packages:

$ dpkg -l | grep nvidia
ii  bumblebee-nvidia                                3.2.1-10                                         amd64        NVIDIA Optimus support using the proprietary NVIDIA driver
rc  nvidia-340                                      340.102-0ubuntu0.16.04.2+gpu16.04.1              amd64        NVIDIA binary driver - version 340.102
rc  nvidia-375                                      375.66-0ubuntu0.16.04.1                          amd64        NVIDIA binary driver - version 375.66
ii  nvidia-384                                      384.59-0ubuntu0~gpu16.04.1                       amd64        NVIDIA binary driver - version 384.59
ii  nvidia-cuda-dev                                 7.5.18-0ubuntu1                                  amd64        NVIDIA CUDA development files
ii  nvidia-cuda-toolkit                             7.5.18-0ubuntu1                                  amd64        NVIDIA CUDA development toolkit
ii  nvidia-modprobe                                 361.28-1                                         amd64        utility to load NVIDIA kernel modules and create device nodes
ii  nvidia-opencl-dev:amd64                         7.5.18-0ubuntu1                                  amd64        NVIDIA OpenCL development files
rc  nvidia-opencl-icd-361                           367.57-0ubuntu0.16.04.1                          amd64        Transitional package for nvidia-opencl-icd-367
rc  nvidia-opencl-icd-375                           375.66-0ubuntu0.16.04.1                          amd64        NVIDIA OpenCL ICD
ii  nvidia-prime                                    0.8.2linuxmint1                                  amd64        Tools to enable NVIDIA's Prime
ii  nvidia-profiler                                 7.5.18-0ubuntu1                                  amd64        NVIDIA Profiler for CUDA and OpenCL
ii  nvidia-settings                                 384.59-0ubuntu0~gpu16.04.1                       amd64        Tool for configuring the NVIDIA graphics driver

Hi, I see that you too have a nvidia-modprobe package version unaligned with the other NVIDIA packages… so I assume the problem isn’t there…

In my case, if I do prime-select nvidia, everything works fine, but I’m back at the starting point, because the NVIDIA card drives the video output even though the DVI cable is attached to the motherboard… so everything works, but if I mine, the PC becomes nearly unusable because it’s too slow… so I am back at the starting point…

My thinking is this: when PRIME is set back to intel, nvidia-smi with the modified file indicated above works perfectly… so maybe it would be sufficient to find a way to create a symlink, for example to libnvidia-ml.so.1, such that the application that needs it can work (see the sketch below).
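
A minimal sketch of that idea, assuming the Ubuntu per-version driver directory and that /usr/local/lib is on the default linker path (untested; it might also interfere with PRIME switching):

sudo ln -s /usr/lib/nvidia-384/libnvidia-ml.so.1 /usr/local/lib/libnvidia-ml.so.1
sudo ldconfig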

I have found a solution that, at least in my case, seems to be 100% OK:

Of course, I had previously installed the NVIDIA 384 driver; after that I enabled the integrated GPU in the BIOS, attached the video cable to the motherboard, installed nvidia-modprobe (hmm, maybe modprobe is not really needed), then set PRIME to intel and rebooted.
The driver and modprobe are installed via the Ubuntu repositories (the PPA for closed-source GPU drivers).

As said above, in this situation I cannot launch applications that need NVIDIA/CUDA.

But fixing this is as simple as following this tip:

  • Open a terminal and run: export LD_LIBRARY_PATH=/usr/lib/nvidia-384
  • Then launch the application that needs NVIDIA… et voilà :D In my case this tip works perfectly; now I can do computing/mining with my GTX 1060 and use the PC without any impact on GUI performance… I can also play games that run on the Intel GPU while mining with the NVIDIA GPU… this is really cool :D

With the tip above, it is also possible to launch the original unmodified nvidia-smi to monitor the card.
I hope this helps other users.
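
To avoid typing the export every time, the tip can be wrapped in a small launcher script, e.g. saved as nvidia-run (a sketch; adjust the driver directory to your version):

#!/bin/bash
# Prepend the NVIDIA driver libraries for this one process only,
# leaving the rest of the session on the Intel/Mesa stack.
export LD_LIBRARY_PATH=/usr/lib/nvidia-384${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
exec "$@"

Make it executable with chmod +x nvidia-run, then run e.g. ./nvidia-run ./miner.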

I did so, but this is what I get:

$ sudo prime-select nvidia
Info: the current GL alternatives in use are: ['nvidia-384-prime', 'nvidia-384-prime']
Info: the current EGL alternatives in use are: ['nvidia-384-prime', 'nvidia-384-prime']
Info: selecting nvidia-384 for the nvidia profile
update-alternatives: using /usr/lib/nvidia-384/ld.so.conf to provide /etc/ld.so.conf.d/x86_64-linux-gnu_GL.conf (x86_64-linux-gnu_gl_conf) in manual mode
update-alternatives: using /usr/lib/nvidia-384/ld.so.conf to provide /etc/ld.so.conf.d/x86_64-linux-gnu_EGL.conf (x86_64-linux-gnu_egl_conf) in manual mode
update-alternatives: using /usr/lib/nvidia-384/alt_ld.so.conf to provide /etc/ld.so.conf.d/i386-linux-gnu_GL.conf (i386-linux-gnu_gl_conf) in manual mode
update-alternatives: using /usr/lib/nvidia-384/alt_ld.so.conf to provide /etc/ld.so.conf.d/i386-linux-gnu_EGL.conf (i386-linux-gnu_egl_conf) in manual mode

citron@citron-grafik ~ $ export LD_LIBRARY_PATH=/usr/lib/nvidia-384

citron@citron-grafik ~ $ blender
Read prefs: /home/citron/.config/blender/2.79/config/userpref.blend
Error! Blender requires OpenGL 2.1 to run. Try updating your drivers.

What happens if you try this:
sudo prime-select intel
reboot
export LD_LIBRARY_PATH=/usr/lib/nvidia-384
blender

I also see that you have Bumblebee installed; maybe that is changing something.

EDIT! Finally the ‘PRIME select’ tab appears in “NVIDIA X Server Settings”, where I can choose between power saving mode (Intel) and performance mode (NVIDIA). In Blender the NVIDIA GPU is found, so everything works; the only drawback is that after each switch I need to log out of the user session and log back in, which Windows doesn’t require.

Hi everyone, I was just searching the net for this topic and finally found this thread…
I have Ubuntu 17.04 64-bit on a laptop with an i7 / integrated Intel GPU and an NVIDIA GT 740M as the second GPU.

I have installed all the recent drivers, both Intel (via https://01.org/linuxgraphics/downloads/intel-graphics-update-tool-linux-os-v2.0.5) and NVIDIA (via the graphics-drivers PPA), plus the CUDA toolkit and an unmatched version of nvidia-modprobe from the standard Ubuntu repositories.

The issue I’m trying to fix is that after all the installs I don’t have any xorg.conf at all,

lspci | grep -i vga

shows only the integrated Intel graphics.
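
(On many Optimus laptops the discrete GPU is listed as a “3D controller” rather than VGA, so a broader filter may be needed to see it: lspci | grep -iE "vga|3d")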

prime-select query

shows “unknown”, and nvidia-settings shows only its window, with no configuration inside.

Reinstalling video drivers or switching to other versions doesn’t change the situation.

I can’t choose or enable/disable any graphics card in the BIOS, but I could disable Secure Boot there.

I would like to use the Intel GPU for everyday work and use NVIDIA in Blender to unlock CUDA GPU rendering, but I can only use Intel everywhere.

BTW, I found the latest nvidia-modprobe 384.59 and built it from source, but it broke my system, so I had to uninstall it.

I made a fresh Linux Mint 18.2 installation and installed the NVIDIA 375 drivers, nvidia-cuda-toolkit and nvidia-modprobe.

I chose the Intel GPU with NVIDIA PRIME and restarted X.org.

I can get nvidia-smi to work when I give it the NVIDIA driver library path:

export LD_LIBRARY_PATH=/usr/lib/nvidia-375; nvidia-smi

I substituted /usr/bin/nvidia-smi with this script to get that working for all driver versions:

#!/bin/bash
export LD_LIBRARY_PATH=/usr/lib/nvidia-375; /etc/alternatives/x86_64-linux-gnu_nvidia_smi "$@"

Now when I try to run Blender with the same “export LD_LIBRARY_PATH” I get:

Error! Blender requires OpenGL 2.1 to run. Try updating your drivers.

The silly thing is, if I do this:

LD_LIBRARY_PATH=/usr/lib/nvidia-375/ ./blender -b -a

which runs Blender in headless mode and renders an animation, I can see in nvidia-smi that it’s using CUDA rendering.

Maybe if I set up a network renderer on my local machine and did all renders through it, I might actually be able to work this way.

If I could make Blender detect NVIDIA CUDA and run, while Intel is used for the display, I would be totally happy. I’d make a custom script in /usr/bin/blender to make it work system-wide, and I’d be done.

Does anyone have an idea what I can do to make it work like this? Some specific LD_PRELOAD or something?

For some reason, when I give Blender the NVIDIA driver path it can’t use OpenGL, but it can use CUDA!
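
One idea I want to try, a sketch I haven’t verified: preload only the CUDA and NVML libraries instead of overriding the whole library path, so libGL still resolves to Mesa for the Intel display. The paths assume the Ubuntu/Mint nvidia-375 packaging, and blender.real is a hypothetical name for the renamed original binary:

#!/bin/bash
# Hypothetical /usr/bin/blender wrapper: expose only libcuda and
# libnvidia-ml from the driver directory, leaving libGL to Mesa.
export LD_PRELOAD="/usr/lib/nvidia-375/libcuda.so.1:/usr/lib/nvidia-375/libnvidia-ml.so.1"
exec /usr/bin/blender.real "$@"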

PS: I tried setting up a network rendering chain but it failed every time so I certainly can’t use that in production right now.

In brief, we have made some progress, but a 100% flexible solution seems to be unavailable for now.
I too found a problem using nvidia-settings to control the fan of my card:
:~$ LD_LIBRARY_PATH=/usr/lib/nvidia-384/ nvidia-settings -a [fan:0]/GPUTargetFanSpeed=80

ERROR: Error querying enabled displays on GPU 0 (Missing Extension).

ERROR: Error querying connected displays on GPU 0 (Missing Extension).

ERROR: Error resolving target specification 'fan:0' (No targets match target
specification), specified in assignment '[fan:0]/GPUTargetFanSpeed=80'.
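
For what it’s worth, my understanding is that nvidia-settings can only address fan targets through an X screen running on that GPU, and manual fan control additionally requires the Coolbits option; a sketch of the relevant xorg.conf Device section (untested in this Intel-primary setup):

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    Option "Coolbits" "4"
EndSection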