CUDA same card as display


I’m new to CUDA and was just wondering - do most people use the same card for GPU computation as for their display, or do people usually have one card dedicated for the display and get another card for CUDA?

Right now I’m running into some display problems (artifacts even when only using terminal/firefox, chunks of my background wallpaper showing through my windows, bottom toolbar not rendering correctly) with my single card that is running the CUDA development drivers. It could very well be a voltage issue since the card is factory overclocked, but could it also be because of the CUDA driver?

I’m not sure whether I should 1) replace the card, 2) buy another card and use this one exclusively for GPU computation, or 3) try to underclock the card (which seems to be very difficult under Linux on the new Fermi architecture).

Please let me know what your suggestions are! Thanks.

I personally have one graphics card for computing (GTX 260) and another for display (9500 GT).

I did have just one card doing both the computing and the display, and I saw similar artefacts and thought the card was overheating. I now think it was because the graphics card memory wasn’t being freed correctly, not a malfunction in the card itself.

Also, I think that Windows will kill a CUDA kernel if it takes longer than about 5 seconds to execute when the graphics card used for the display is the same as the one used for CUDA (the display watchdog timer).

When you want to debug, you need to either shut down the window manager (I use Ubuntu 10.04 LTS, so I stop X11 with sudo stop gdm) and debug from a virtual terminal (Ctrl+Alt+F1), or use a second card for the display and a dedicated CUDA card.
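For reference, a typical console debugging session on my Ubuntu 10.04 setup looks roughly like this (the binary name is a placeholder, and other distributions use different display managers and service commands):

```shell
# Switch to a virtual terminal first (Ctrl+Alt+F1), then stop the
# display manager so no X server is holding the GPU (Ubuntu 10.04
# uses upstart, hence "stop gdm"; other distros differ).
sudo stop gdm

# Debug the CUDA application from the console; cuda-gdb cannot
# reliably debug code on a GPU that is driving an active X display.
cuda-gdb ./my_app        # "my_app" stands in for your own binary

# When finished, bring the display manager back.
sudo start gdm
```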

Thus, I would recommend option 2.

I don’t think the display card needs to be CUDA-enabled, so you can pick up a cheap one pretty easily.

When you get the additional graphics card, put the display card in the first PCIe slot and the CUDA card in the second. That’s how I’ve done it.
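Once both cards are installed, it’s worth confirming which CUDA device index each one actually got rather than assuming it follows slot order. A quick check, assuming you have the driver tools and the GPU Computing SDK samples built (the SDK path below is just my guess at a typical install location):

```shell
# List the GPUs the driver sees, with their names and current state.
nvidia-smi

# The deviceQuery SDK sample prints the CUDA device index, name, and
# memory for each card, so you can confirm which index to pass to
# cudaSetDevice() in your code.
cd ~/NVIDIA_GPU_Computing_SDK/C/bin/linux/release   # path is an assumption
./deviceQuery
```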

Hope this helps,


Except when developing on my laptop, I typically only use CUDA on devices which are not driving a display. They are either single-GPU headless nodes that I access over SSH, or multi-GPU nodes where I have a dedicated display device. (I can’t believe I’m using a GTX 295 as a display-only card now. 768 MB just isn’t enough anymore…)

Before doing something drastic, I would suggest that you try the latest NVIDIA drivers for your platform. There is nothing special about the CUDA development drivers; they are just the first driver release guaranteed to work with that CUDA version. After some time passes, the standard NVIDIA drivers also support the same CUDA release. For example, I’ve been doing development with the 280.x series of Linux drivers for a few weeks now, even though the CUDA 4.0 download page lists 270.41 as the “Developer Driver”.
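To see which driver you are actually running before and after updating, two quick checks on Linux:

```shell
# Show the version of the NVIDIA kernel module currently loaded.
cat /proc/driver/nvidia/version

# nvidia-smi also reports the driver version along with per-GPU status.
nvidia-smi
```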

Thanks for the advice - I’ll look into updating to the newest drivers as well as purchasing a cheaper card just for the display.