Setting up a Linux GPU dev box with integrated graphics driving the display

So I just set up my new dev box with the integrated graphics driving the display. This matters because you generally don’t want to debug on the same GPU that’s driving the display, and this way your system UI won’t get bogged down while GPU kernels are running. This is also known as a “headless” GPU config. I figured I’d share this, as it was non-trivial to figure out.

I installed Linux without any GPU cards installed and with the display connected to the Intel integrated graphics.

You can now install your GPUs, but make sure the BIOS is configured to use the integrated graphics specifically (not in auto mode).

Download the latest NVIDIA runfile driver (not a package install).

Run it like this:

sudo ./NVIDIA-Linux-x86_64-<version>.run --no-opengl-files --no-x-check --disable-nouveau

--no-opengl-files is the key option that makes this work. We don’t want to overwrite the Intel OpenGL setup, which compiz depends on.
--no-x-check allows us to install the driver without stopping X.
--disable-nouveau is there just in case you do want to drive the display from a GPU at some point.

When prompted, don’t let the installer configure X (we don’t want to mess with the current X config).
If you’re booting under UEFI with Secure Boot, have the installer generate a signing key pair for you, and save it for future driver updates. Put the public key on a USB stick and append it to the db store in your BIOS.
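Once the module is installed you can check that it actually carries a signature. This is a sketch, not part of the original setup; the public key location in the comment is an assumption (the installer prints the real paths when it generates the keys):

```shell
# Signed kernel modules expose a "signer" field; unsigned ones don't.
modinfo -F signer nvidia

# Copy the installer-generated public key to a USB stick for enrollment
# in the firmware's db store (path is an assumption; check installer output):
# cp /usr/share/nvidia/nvidia-modsign-crt-*.der /media/usb/
```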

Now, with the driver installed, you need some way to load it that isn’t through X. You can use an nvidia-docker instance, but a much lighter-weight option is the persistence daemon:

There’s a super simple installer contained in /usr/share/doc/NVIDIA_GLX-1.0/sample/nvidia-persistenced-init.tar.bz2. I’d also add --persistence-mode to the daemon’s init script; that extra flag seems to make the driver a bit snappier to engage. The persistence daemon works just fine with consumer GPUs (despite NVIDIA’s claim otherwise).
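For reference, the install goes roughly like this. The tarball path is the one mentioned above; the installer script name inside it is what I found on my system, so treat it as an assumption:

```shell
# Unpack the sample init scripts shipped with the driver
cd /tmp
tar xjf /usr/share/doc/NVIDIA_GLX-1.0/sample/nvidia-persistenced-init.tar.bz2
cd nvidia-persistenced-init

# The installer detects init/upstart/systemd and installs the matching script;
# edit the script afterwards to append --persistence-mode to the daemon invocation.
sudo ./install.sh
```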

Now you can install CUDA, but don’t install the driver bundled in that package.
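With a runfile CUDA installer, skipping the bundled driver looks roughly like this (the filename is a placeholder and the exact flags vary by CUDA version; recent installers accept --toolkit to install only the toolkit):

```shell
# Install the CUDA toolkit only; do NOT install the bundled driver,
# which would clobber the runfile driver installed above.
sudo sh cuda_<version>_linux.run --silent --toolkit
```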

I’ve never used the persistence daemon (what does it do, anyway?). For many years we’ve been using a combination of init/upstart/systemd scripts to manage the NVIDIA devices (create dev nodes, load/unload/reload kernel modules, set persistence mode, application clocks, and whatever else one may want), as well as minimal X servers for headless configs so we can override the silly “acoustic optimization”, i.e. crank up the fan speeds using nvidia-settings.

The persistence daemon just keeps the driver loaded when no other applications are using it. If you don’t want X displayed through the NVIDIA card, you need some other mechanism to load the GPU driver for compute. What I described is the simplest way I know to do this for development purposes. For servers that don’t need to drive a display there are other options.
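A quick way to confirm the driver is loaded for compute with no X involved, assuming the setup above:

```shell
# With the persistence daemon running, the module stays resident:
lsmod | grep nvidia

# nvidia-smi talks to the driver directly; no X server required.
nvidia-smi
```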

Thanks for the great explanation.

I’m currently doing CUDA programming on a Linux laptop, but am in the process of building a desktop machine for heavy lifting. I use bumblebee on my laptop, but I don’t think that will work on the desktop, nor would I want the complexity of it. So this is perfect.

I’ve come across two solutions for working on a laptop:

1 - Use bumblebee with “optirun --no-xorg” (currently using).
2 - Switch GPUs using nvidia-prime, but don’t log out and back in again. I haven’t actually tried this myself, but it should enable the NVIDIA GPU while keeping the display on the Intel GPU.
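For option 2, the switch would look something like the following (whether skipping the logout really leaves the display on the Intel GPU is conjecture, as noted above):

```shell
# Switch the active GPU profile to NVIDIA (normally followed by a logout)
sudo prime-select nvidia

# Check which profile is currently selected
prime-select query
```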

I’ve struggled to understand exactly what nvidia-persistenced is used for. I was mainly concerned it would fight with bumblebeed for control of the driver module. But now, thanks to your explanation, I can see the use for it: keep the device active without any X clients or a complicated X configuration.

NVIDIA really should update prime to handle compute loads. It could have something like a compute-only flag to turn the GPU on. I wonder if you could get the same effect by simply stopping and starting the persistenced service, though.
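If the daemon was installed as a systemd unit, toggling it is just (unit name assumed to be nvidia-persistenced, which is what the sample scripts install):

```shell
# Stopping the daemon lets the driver unload when idle;
# starting it again keeps the driver resident for compute.
sudo systemctl stop nvidia-persistenced
sudo systemctl start nvidia-persistenced
```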

Of course, it would be nice to have full hot-switching of GPUs for both display and compute, but I don’t think there’s much NVIDIA can do about that right now. I doubt X11 is even capable of switching the display like that without crashing.

I agree that it’s time GPUs can be considered as purely compute devices. We shouldn’t have to deal with all the graphics baggage when trying to use them as such.

Oh, and one additional note about this setup. I find that Ubuntu updates tend to break this config on a regular basis: the system starts using the software GL pipeline, which is much slower. The easy way to check is to open the Settings > Details panel and look at the Graphics entry. If it doesn’t say Intel HD Graphics, you just need to uninstall and re-install the NVIDIA driver.
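Another way to check from a terminal, assuming the mesa-utils package is installed:

```shell
# Should report the Intel device, not NVIDIA or llvmpipe
# (llvmpipe means you've fallen back to the software GL pipeline).
glxinfo | grep "OpenGL renderer"
```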

Hey, this guide worked great. The only issue is I can’t overclock the memory; everything else works. Any suggestions?