How disruptive is graphics to the performance of CUDA code?

I often need to benchmark the computation time of kernels that I write. I want to keep noise during benchmarking to a minimum, since some of the results go into tech reports and papers.
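
For reference, this is roughly how I time a kernel with CUDA events; myKernel, the launch configuration, and NITER are just placeholders for whatever I happen to be benchmarking:

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void myKernel() { /* kernel under test (placeholder) */ }

    int main()
    {
        const int NITER = 100;                  // average over several launches
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        myKernel<<<1, 256>>>();                 // warm-up launch, not timed
        cudaDeviceSynchronize();

        cudaEventRecord(start);
        for (int i = 0; i < NITER; ++i)
            myKernel<<<1, 256>>>();
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop); // elapsed time in milliseconds
        printf("average kernel time: %f ms\n", ms / NITER);

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        return 0;
    }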

Therefore, I use one NVIDIA GPU for computation and a second (older) NVIDIA GPU to drive the graphics on my monitors. I’m not playing games or doing anything graphics-intensive; I’m just looking at documents and code on the monitors.
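
To make sure the benchmark only ever touches the compute card, I select it explicitly at the start of the program (the device index 1 below is just an example; the right index depends on enumeration order), or restrict visibility with the CUDA_VISIBLE_DEVICES environment variable:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        cudaError_t err = cudaSetDevice(1);   // index of the compute GPU (example)
        if (err != cudaSuccess) {
            printf("cudaSetDevice failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
        // ... allocate memory, launch, and time kernels on this device ...
        return 0;
    }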

I’m thinking of running compute and graphics on one GPU instead of keeping the “one GPU for compute, one GPU for graphics” setup. Should I expect a graphics workload like this to make much of a difference to the performance of my CUDA code?

I did this in the past with a GTX 580 and did not notice a large impact on performance for simple desktop rendering (although I would disable any fancy desktop effects, such as compositing).

I would watch for the following though:

  • You will no longer be able to use cuda-gdb to pause execution
  • The watchdog timer may be problematic, depending on your OS/configuration. If the watchdog timer is enabled, it will kill any long-running kernels; if it is disabled, your screen will freeze during long-running kernels. (You can query whether the watchdog is active on a given device; see the sketch after this list.)
  • You will likely not be able to run the full profiler. That tends to slow down kernels, which exacerbates the above issues.
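
A quick way to check whether the watchdog is active is to read kernelExecTimeoutEnabled from the device properties; it is normally set when the GPU is also driving a display. A minimal sketch:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        int dev = 0;
        cudaGetDevice(&dev);                    // currently selected device
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("device %d (%s): watchdog %s\n", dev, prop.name,
               prop.kernelExecTimeoutEnabled ? "enabled" : "disabled");
        return 0;
    }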

I now use two GPUs and prefer that. You may be able to use the integrated video for display and the discrete GPU for CUDA. It is not generally seamless, though: you will likely need to activate the integrated video via the BIOS and take measures so that the discrete GPU is initialized (easier in Linux than in Windows).

The main problem, for me, is the watchdog timer when you use the same GPU for CUDA and graphics.
Also, on a Mac running OS X, I observed a drop in performance compared to a standard Linux distribution, caused by OS X reserving part of the GPU for the UI.
Moreover, my X server would freeze when I launched heavy kernels on my GTX 470.

Why would you change your setup? If you don’t play games or otherwise need a powerful GPU for graphics, you already have the perfect setup for work.

Thanks for the feedback!

Until recently, my machine had 2 GPUs: C2050 (Fermi) for compute and an old GTX9800 (G92) for desktop graphics.

I just got a third GPU, a GTX680 (GK104). I’m now running 3 GPUs in my box, and it’s great to be able to test on both Kepler and Fermi without having to swap GPUs in and out. But I’m a little concerned about cooling: my PCI slots are spaced such that the only option is to have the 3 GPUs touching each other. I was thinking of removing the GTX9800 and running graphics on the GTX680 instead. But I’ve been keeping an eye on the temps (and they seem safe), so maybe I’ll just keep it this way.
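
In case it helps anyone with a similar multi-GPU box, this is roughly how I list the installed cards and their compute capabilities so I know which device index is the Fermi and which is the Kepler when launching tests:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int dev = 0; dev < count; ++dev) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            printf("device %d: %s (compute capability %d.%d)\n",
                   dev, prop.name, prop.major, prop.minor);
        }
        return 0;
    }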