One GPU card for display and one for CUDA computing

I want to use two GPU cards. One card is for display and the other is dedicated solely to CUDA computing.
I chose these cards:
Display card: ASUS GeForce 210 512MB (around 50 SFr, ~$50)
CUDA computing card: MSI GTX 460 1GB (around 250 SFr)

Can someone tell me whether this configuration is valid/feasible, and has anyone tried such a setup?

Thanks.

There isn’t any reason I can think of why that won’t work. I use a similar setup on my development box, with X11 having a display on one card and a second card dedicated to computation. The nvidia-smi utility is useful for making the display card unavailable for compute tasks.

Thank you for your help.

How do you configure the machine so that one card drives the display monitor and the other is used for CUDA?

Maybe an irrelevant question: the cards shouldn’t be installed in SLI mode, should they?

In the xorg configuration, you can specify the display card by its PCIe bus ID, something like this:

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 275"
    BusID          "PCI:1:0:0"
    Option         "Coolbits" "1"
EndSection

and then delete any “Device” section for the compute card. There is a utility called nvidia-smi which can then be used to set the compute mode of each card: if you set the display card to “compute prohibited”, the driver will not permit a CUDA context to be established on that device.
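As a concrete sketch of that nvidia-smi step (the GPU index 0 here is an assumption — run the listing command first and use whichever index your display card actually gets):

```shell
# List all GPUs with their indices, names and UUIDs
nvidia-smi -L

# Set the compute mode of GPU 0 (assumed here to be the display card)
# to 2 = "Prohibited", so no CUDA context can be created on it
nvidia-smi -i 0 -c 2

# Verify the change
nvidia-smi -q -i 0 | grep -i "compute mode"
```

Note that older nvidia-smi versions used -g instead of -i to select the GPU, and the setting does not necessarily survive a reboot, so you may want to run this from a startup script.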

It is irrelevant, both because CUDA doesn’t have anything to do with SLI, and because you can’t use SLI with two dissimilar GPUs anyway (not that the GT210 supports it).

Thanks a lot avidday.

I have a similar setup, except that my display card is very old and primitive, but still CUDA-capable.

A couple of minor additions to the previous advice:

  1. Be sure to place your display card in the “primary” PCI slot, i.e. the one that displays the boot sequence. Otherwise you’ll see a blank screen until the X server starts up, which is rather inconvenient for debugging boot and hardware issues.

Unfortunately, my setup doesn’t allow that card placement, because the compute card won’t fit into the “secondary” PCI slot due to the geometry of my enclosure. So I have to switch the display cable back and forth whenever a boot or hardware issue comes up.

  2. Regarding disabling CUDA on the display card: if your compute card is significantly more advanced than the display card, CUDA code will run on the compute card by default (i.e. when cudaSetDevice(…) is not invoked in your code). This happens even if you run several CUDA programs in parallel. So I found the nvidia-smi step unnecessary in that case.
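If you’d rather not rely on the default device ordering at all, you can pick the compute card explicitly. A minimal host-side sketch (matching on the substring "GTX 460" is just an assumption for this thread’s setup — adapt it to however you want to identify your compute card):

```cpp
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    // Walk all CUDA-capable devices and select the compute card by name.
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s (compute capability %d.%d)\n",
               i, prop.name, prop.major, prop.minor);
        if (strstr(prop.name, "GTX 460") != NULL) {
            cudaSetDevice(i);  // all subsequent CUDA calls in this
                               // host thread will use this card
        }
    }
    return 0;
}
```

If the display card has been set to “compute prohibited” with nvidia-smi, a cudaSetDevice on it would simply fail, so the two mechanisms complement each other.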