Two graphics cards in Linux - how?

Now that I have the CUDA stuff installed on my Ubuntu 12.04, two new questions have popped up.

  1. How do I add another graphics card so that one runs X and the other is used for CUDA?

  2. How can I select the graphics card used for X at boot - and can I?
    (And can I make that selection from a Windows BCD dual-boot menu?)

#1 Just add the additional card and call cudaSetDevice() to select the correct GPU, or use the environment variable CUDA_VISIBLE_DEVICES to ignore the GPU driving the display altogether.
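A minimal shell sketch of the second option (the device index 1 here is hypothetical - check the real numbering on your machine with nvidia-smi or deviceQuery first):

```shell
# Hypothetical setup: nvidia-smi lists the display GPU as 0 and the
# compute-only GPU as 1.
export CUDA_VISIBLE_DEVICES=1   # hide every GPU except physical device 1
# Any CUDA program launched from this shell now sees exactly one device,
# renumbered as 0, so cudaSetDevice(0) lands on the compute card.
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
```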

#2 Depends on your motherboard. Usually there is a primary slot that gets initialized first for video, or there may be a BIOS option to select which slot is initialized first (less common on newer boards). You probably won’t be able to switch cards dynamically, if that’s what you’re asking. On some boards, connecting a display to a card may make it the active one, but that’s not common; on later motherboards it’s usually determined by slot position.

On my system I often need to toggle between 2 CUDA cards with different SM versions. This is what I have in my Xorg config:

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 280"
    BusID          "PCI:5:0:0"
    #BusID          "PCI:3:0:0"
EndSection

Note that this should be the only “Device” section in your config. “BoardName” does not really matter - just make sure you reference the BusID of the correct device.

And be sure to back up your xorg.conf before making any changes :)
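One gotcha when filling in BusID: lspci prints bus numbers in hexadecimal, while xorg.conf expects decimal. A quick sketch of the conversion (the "0a" bus number is a hypothetical example):

```shell
# lspci prints bus numbers in hex, but xorg.conf's BusID wants decimal.
# Hypothetical example: a card lspci shows at "0a:00.0" needs "PCI:10:0:0".
BUS_HEX=0a                              # bus number as printed by lspci
BUS_DEC=$(printf '%d' "0x$BUS_HEX")     # convert hex -> decimal
echo "BusID \"PCI:$BUS_DEC:0:0\""       # the value to put in xorg.conf
```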


I wasn’t after dynamic switching, but I was wondering:

  • If both are enabled in the BIOS, does Linux “automagically” use them both?
  • If one is disabled in the BIOS, is the hardware disabled altogether so that CUDA can’t access it either?
  • If I have only one card, is there a simple way to add a boot entry, like:

Ubuntu, with Linux 3.2.0-20-generic
Ubuntu, with Linux 3.2.0-20-generic (CUDA mode)
Ubuntu, with Linux 3.2.0-20-generic (recovery mode)

(1) It depends on what you mean by ‘use’. The primary card as determined by your motherboard will show video, and will be the card you can plug display devices into. I’m not sure whether you could use the two cards to drive two or more different screens, if that’s what you’re asking. If you mean CUDA-wise, both cards should be ‘available’ by default for use by CUDA programs… if deviceQuery sees it, it’s available for use. Using both at the same time requires corresponding logic in your CUDA code.
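A quick, hedged way to check that availability from the shell, without writing any CUDA code (`nvidia-smi -L` prints one line per GPU the driver knows about):

```shell
# Count the GPUs the NVIDIA driver exposes; each GPU is one line of output.
# GPU_COUNT stays 0 on a machine without the driver or the nvidia-smi tool.
if command -v nvidia-smi >/dev/null 2>&1; then
    GPU_COUNT=$(nvidia-smi -L | wc -l)
else
    GPU_COUNT=0
fi
echo "GPUs visible to the driver: $GPU_COUNT"
```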

(2) If you’re able to disable a physical slot from your BIOS, then yes, the card in that slot will not show up to the O/S at all - I believe disabling a slot actually does just that: sets an option so that the slot physically receives no power. Even if you can’t disable the slot, the CUDA_VISIBLE_DEVICES environment variable lets you make a card ‘invisible’ as far as CUDA programs are concerned. If you run deviceQuery after setting that variable to hide a GPU, you’ll see that it does not show up.
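For example, on a hypothetical three-GPU box where GPU 0 drives the display:

```shell
# Expose only GPUs 1 and 2 to CUDA; they are renumbered 0 and 1 inside CUDA.
export CUDA_VISIBLE_DEVICES=1,2
# deviceQuery run from this shell now reports only two devices; the hidden
# card cannot be opened by any CUDA call at all.
echo "CUDA sees only GPUs: $CUDA_VISIBLE_DEVICES"
```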

(3) Not sure what ‘CUDA mode’ implies here. If a CUDA-compatible card is installed and you’ve already set up the toolkit, CUDA is always available. Again, if you want to selectively ‘disable’ cards from CUDA’s point of view, just do it with the CUDA_VISIBLE_DEVICES environment variable.

As vacaloca is saying, there is very little difference between a CUDA-capable device running your display and a CUDA-capable device running CUDA programs. You can do both on the same card at the same time, with only a few drawbacks:

  • The GUI will use some of the device memory, so CUDA programs can’t use as much.
  • The device can’t update the GUI display while a CUDA kernel is running, so if you have long kernels, the display will be very jumpy. If the kernel goes past a few seconds, the driver watchdog will abort it to keep the display from freezing for too long.
  • There is some context switching overhead every time the card has to switch between running a CUDA kernel and updating the display, so if you are doing 3D rendering and running a CUDA program, you’ll see some slowdown.

To avoid these drawbacks, all you have to do is stop the server or configure it (as eugeneo showed above) to use a different device for the display. No BIOS or kernel fiddling required.
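A sketch of the “stop the server” route, assuming Ubuntu 12.04 whose default display manager is lightdm (substitute gdm, kdm, etc. on other setups); the actual commands are shown as comments because they need root and should be run from a text console:

```shell
# Switch to a text console first (Ctrl+Alt+F1) and log in there, then:
#
#   sudo service lightdm stop     # stop X; the GPU is now free for CUDA
#   ...run or debug CUDA programs...
#   sudo service lightdm start    # bring the GUI back when done
#
DM=lightdm    # assumption: Ubuntu 12.04's default display manager
echo "assumed display manager: $DM"
```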

(Note that if you boot Linux with no server, the /dev/nvidia* entries used by CUDA programs might not be automatically created. The Linux release notes show what you have to add to your startup scripts to ensure that those device files are created at boot.)
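For reference, a sketch modeled on the script in those release notes (195 is the character-device major number the NVIDIA driver uses; the script needs root and is a harmless no-op on machines without the nvidia module):

```shell
#!/bin/sh
# Create the /dev/nvidia* nodes at boot when no X server is around to do it.
N=0
if /sbin/modprobe nvidia 2>/dev/null; then
    # one node per NVIDIA VGA/3D controller, plus the control node
    N=$(lspci 2>/dev/null | grep -i nvidia | grep -ciE 'vga|3d controller' || true)
    i=0
    while [ "$i" -lt "$N" ]; do
        mknod -m 666 "/dev/nvidia$i" c 195 "$i"   # 195 = NVIDIA char major
        i=$((i + 1))
    done
    mknod -m 666 /dev/nvidiactl c 195 255
fi
echo "device nodes created for $N GPU(s)"
```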

“To avoid these drawbacks, all you have to do is stop the server or configure it (as eugeneo showed above) to use a different device for the display. No BIOS or kernel fiddling required.”

That’s what I meant by “CUDA mode”.

It doesn’t feel good to have to log in from an alternate terminal and shut down the X server every time.
The machine might as well boot straight to runlevel 3 (was it?).
But then - goodbye to all X-based tools, like geany or ddd (do I need them?).

If I want both X and CUDA in use at the same time, do I need two cards?

You don’t need two cards, but there are advantages to having two cards. I have worked with systems where CUDA runs directly on the same GPU as X. It was fine, since I didn’t mind the restrictions I listed above.

Then what happens to X if you are using one card and debugging a CUDA program with a breakpoint set?

Anyway, booting into “CUDA mode” is solved: it’s easy to do with grub-customizer

Maybe someone else will find this interesting too. It can also be used if something is wrong with X.
Just add an entry, copy the contents of the default entry, change the boot parameters
from “quiet splash” to “text”, and finally save. That’s all.
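For anyone doing this by hand instead of with grub-customizer, the custom entry in /etc/grub.d/40_custom looks roughly like this (the kernel version and UUID below are placeholders - copy the real lines from the matching entry in your own /boot/grub/grub.cfg):

```
menuentry 'Ubuntu, with Linux 3.2.0-20-generic (CUDA mode)' {
        # ...copy these lines from the matching entry in /boot/grub/grub.cfg,
        # replacing "quiet splash" on the linux line with "text"...
        linux   /boot/vmlinuz-3.2.0-20-generic root=UUID=... ro text
        initrd  /boot/initrd.img-3.2.0-20-generic
}
```

Run `sudo update-grub` afterwards so the new entry appears in the boot menu (grub-customizer does that step for you).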

Hello there,

I have followed the instructions to start X on one of my graphics cards. I have four cards (two Titan Blacks, one Titan X and one GeForce 970).

My motherboard actually has two PCI Express 3.0 x16 slots, so I want to put one of the Titan Blacks in one of them and the Titan X in the other.

I have changed the configuration successfully in /etc/X11/xorg.conf, and after doing so I can log in to the system as normal; the nvidia-smi command shows the X server running on the correct graphics card. But when I try to switch to a tty console, there isn't any - it seems that none of them start during boot.

Is there any way to solve this?

Thanks in advance