You can use the same card for CUDA and video output. There are only two things to consider:
Rendering heavy graphics while your CUDA program is running will, naturally, slow both down. This isn't a big issue if you're only driving a desktop or a windowed GUI.
There are watchdog timers enabled on cards connected to displays in Windows XP/Vista. These shut down any kernel running on the GPU for more than a couple of seconds (the driver figures it's an infinite loop or some other botch and kills it for your safety). This isn't usually a problem, since the vast majority of CUDA kernels execute on the order of milliseconds. In fact, it sort of guards against programming mistakes.
Many people around here use the same card for CUDA and display.
As for dual-GPU or SLI configurations, you can direct both cards to do computing, but:
- you have to explicitly manage work
- you need to have at least one CPU core per GPU for reasonable performance (a separate CPU thread to manage each card)
Two GPUs can process different parts of the same task. You should note that there's no direct communication between cards. If you need to synchronize data among many GPUs, you have to copy it back to the CPU, sync there, and upload it to the GPUs again. This frequent copying back and forth can be a slowdown, because otherwise you can leave your data lying in GPU DRAM between kernel calls.
There are many people here doing this as well.