Primary GPU as CUDA Only?

Currently, I’m running a GTS 450 in my primary (and only) PCI-E x16 slot. I’ve got a Quadro NVS 295 (PCI-E x1) in a secondary slot (x8 physical slot) since my motherboard (Z8NA-D6C) only has one x16 physical slot. Long story short, I am really impressed with the stability of the Quadro NVS 295 and am thinking about purchasing another to drive my other display (currently driven by the GTS 450).

So, the configuration would be: 2x Quadro NVS 295 each driving a single 24" LCD display. The GTS 450 would only be used for CUDA computations.

I am confident in the performance of the Quadro NVS 295, and I do not game (this is a workstation). It is only used for driving the 1920x1200 24" LCD display. The most taxing thing it would do is play full screen video.

Does anyone have any thoughts on this setup? How does a GPU that is not driving a display handle CUDA? CUDA is already disabled on the NVS and would also be disabled on the new NVS.
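For what it's worth, a compute application can also target the non-display card explicitly instead of relying on driver settings. A minimal sketch of host code using the CUDA runtime API (matching the device by name is my own illustration, not something from this thread; it needs nvcc and an installed driver to run):

```cpp
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("device %d: %s\n", i, prop.name);
        // Pick the GTS 450 for compute work; the NVS cards
        // keep driving the displays and are never selected.
        if (strstr(prop.name, "GTS 450") != NULL)
            cudaSetDevice(i);
    }
    return 0;
}
```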

The GTS 450 is a better card in every way, so keep the GTS in the x16 slot because it may need the extra bandwidth. As you wrote, use the Quadros for your displays, and do ONLY CUDA on the GTS. And never use the same graphics card for both CUDA and your display, because it cuts into the computational power. Not by much, but if you take this whole thing seriously, it does matter. That said, the Quadro NVS 295 supports two displays, so I guess you don’t even have to buy another one to drive both, or am I wrong? :S

A little bit off-topic, but a question came to mind.
If you use your only CUDA card at runlevel 3 (no X running), isn’t that the same as having a dedicated card for CUDA?
I think it is (because you can run cuda-gdb, for example), but it would be nice to confirm.
Thank you


Yes, it is the same. There is no watchdog, so kernels can run as long as you wish.
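You can also confirm this from code: the CUDA runtime reports whether a display watchdog applies to a device via the `kernelExecTimeoutEnabled` field of `cudaDeviceProp`. A small hedged check (host code only, needs nvcc and a driver):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // kernelExecTimeoutEnabled is nonzero when a watchdog can
        // kill long-running kernels on this device (i.e. when it is
        // attached to a display). At runlevel 3 it should report 0.
        printf("device %d (%s): watchdog %s\n", i, prop.name,
               prop.kernelExecTimeoutEnabled ? "enabled" : "disabled");
    }
    return 0;
}
```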

The reason for running two GPUs (one for each monitor) is a longstanding bug/hardware issue that has been present in nVidia hardware (or the way Windows interfaces with it) since the beginning of the 8000-series GPUs. The issue I’m referring to is the drop in frame rate in GPU-accelerated interface rendering (Windows Vista/7) and video playback that accompanies running multiple displays off of one GPU.

I haven’t been able to track down the specifics of the issue, but it does not seem to be related to compute power. I’ve gone through many GPUs (GTS 200 series, GTX 200 series, GTX 470, Quadro FX, etc.) and all exhibit the same issue. As soon as I drop back to a 7950 GT, the problem completely goes away and I can drive two monitors off of a single GPU without a problem.

This problem persists across other hardware too. It’s happened on the last five or six machines I’ve built, with different monitors, motherboards, etc. The only things common to the problem are that it happens on Windows (with a hardware-rendered interface, like Aero) and on nVidia GPUs from the 8000 series onward.

The solution to the problem is simple: one GPU for one monitor. The problem completely disappears. It could be a Windows problem too, but the workaround I use works and helps keep nVidia in business, so I’m not going to worry about it. Disabling Aero fixes the issue as well.

In case anyone is more curious about the issue, the best way I can describe it is it’s like the interface becomes “laggy.” It’s the same issue that caused me to drop the Android platform in favor of the iOS platform.

When dragging windows around the screen, the windows tend to “skip” across the screen rather than flow. There’s a latency issue that appears as well.

Also, with video, frames can be dropped and simply not rendered. I guess that’s about the best I can describe it without actually demoing it for someone in the same room. As I mentioned, this problem appeared on the Quadro FX 1800, on the GTX 470, on my current GTS 450 and its prior cousin the GTS 250, among others. Yet… my 7950 GT doesn’t exhibit this problem, go figure.