Single GTX 750Ti card for driving two monitors and supporting general purpose computing?

I’m going to be upgrading some older workstations (DG965WH motherboard with an Intel Core 2 Duo 2.4 GHz E6600 processor and no graphics card) from a single monitor to dual monitors. The workstations are used for tasks such as decoding multiple MPEG2 video streams, multiple windows of electronic strip charts, and a variety of LabVIEW applications, with the mix of tasks varying from one workstation to the next and from time to time. My plan is to add a single GTX 750Ti card to each workstation, thinking that the single card, in addition to accomplishing the upgrade from single to dual monitors, will enable significant GPGPU support for applicable LabVIEW applications. Is this plan going to work?

I’m aware there are advantages to using Tesla cards with TCC drivers rather than other Nvidia cards with WDDM drivers, but Tesla cards are not in the budget. I’m also aware there are advantages to using a dedicated graphics card for GPGPU, but I can only fit one graphics card in the workstation, and as best I’ve been able to determine so far, one of the tasks that single card will have to do is drive dual monitors.

So, under these constraints, can the shared GTX 750Ti provide useful GPGPU while handling the other tasks, e.g. driving dual monitors? Will a registry edit be needed to increase the watchdog period (recommended value?) to avoid WDDM watchdog timeouts without unduly constraining CUDA programming? How does one use the system so that tasks such as display updates and MPEG2 decoding occur in a timely manner, with GPGPU work adapting to take advantage of the fluctuating remaining GPU capacity? Or am I off in the wrong direction, pursuing a non-feasible “free lunch”? Thanks for any help you can provide.
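
For reference, here is a minimal CUDA sketch (my own, untested on these workstations) that I believe reports whether the WDDM watchdog applies to a given device; kernelExecTimeoutEnabled is the relevant field of cudaDeviceProp:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // kernelExecTimeoutEnabled is 1 when the display watchdog (WDDM TDR)
        // applies to this device, i.e. long-running kernels can be killed.
        printf("Device %d: %s, watchdog %s\n", dev, prop.name,
               prop.kernelExecTimeoutEnabled ? "ENABLED" : "disabled");
    }
    return 0;
}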

If your kernels use a lot of GPU resources and/or take a long time to execute, and you disable the TDR setting, you will probably get screen freezes while the kernels are running. Other than that, it should work fine.
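
If you cannot relax the TDR settings, one common mitigation (just a sketch with arbitrary sizes, not something I have tried on your exact setup) is to split long computations into many short kernel launches, so that no single launch approaches the watchdog limit and the display can be serviced between launches:

#include <cuda_runtime.h>

// Hypothetical kernel that processes one chunk of a larger array.
__global__ void process_chunk(float *data, int offset, int chunk_size, int n) {
    int i = offset + blockIdx.x * blockDim.x + threadIdx.x;
    if (i < offset + chunk_size && i < n) {
        data[i] = data[i] * 2.0f + 1.0f;   // placeholder computation
    }
}

void run_in_chunks(float *d_data, int n) {
    const int chunk = 1 << 20;             // ~1M elements per launch (arbitrary)
    const int threads = 256;
    for (int offset = 0; offset < n; offset += chunk) {
        int blocks = (chunk + threads - 1) / threads;
        process_chunk<<<blocks, threads>>>(d_data, offset, chunk, n);
        // Synchronizing after each short launch gives WDDM a chance to
        // service the display between kernels.
        cudaDeviceSynchronize();
    }
}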

Not sure if it’s possible on that board since it’s a bit older, but on newer boards you can still use the integrated graphics even when a discrete GPU is installed. If that works, you wouldn’t have to worry about freezing, but you’d only be able to drive one screen from the integrated graphics.

Assuming your motherboard supports it, the solution is to use two video cards: drive your displays with a PCI-E x1 card (that motherboard has only one x16 slot, but it also has one PCI-E x1 slot), for example ZOTAC GeForce GT 610 ZT-60607-10L Video Card - Newegg.com, and tell your GPU-compatible LabVIEW code to execute on the GTX 750Ti.
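
If you do end up with two NVIDIA cards in a machine, steering the compute work is mostly a matter of selecting the right device index. A rough sketch (the name-matching helper is my own, not part of any LabVIEW toolkit) of how CUDA code can pick the 750Ti explicitly:

#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

// Pick the first device whose name contains the given substring
// (e.g. "750 Ti"), so compute never lands on the display-only card.
int select_device_by_name(const char *substr) {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        if (strstr(prop.name, substr) != nullptr) {
            cudaSetDevice(dev);
            printf("Using device %d: %s\n", dev, prop.name);
            return dev;
        }
    }
    return -1;  // not found
}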

The workstations are “2U” rack-mounted chassis with a riser card, so the PCI-E x1 slot is not accessible. I’m not going to be able to get two graphics cards into the chassis. I will explore the possibility of driving the monitors with the integrated graphics and a USB video adapter, and dedicating the graphics card to GPGPU, but I won’t be surprised if that approach doesn’t work.

Experienced input from the National Instruments LabVIEW community suggests that we’ll likely do OK with the TDR settings left at default, given that we are not doing massive amounts of computation (e.g. no weather modeling, no large system simulations).

Any pointers to “instructions” for selecting and executing some sample GPGPU routines while a “baseline scenario” is running in order to observe the impact of GPGPU on the “baseline scenario”?
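
In case it helps, here is the sort of load generator I have in mind, a simple arithmetic-heavy CUDA kernel launched in a loop and timed with CUDA events, so its run times can be logged while the baseline scenario runs (sizes and iteration counts are arbitrary guesses on my part):

#include <cstdio>
#include <cuda_runtime.h>

// Simple arithmetic-heavy kernel used only to load the GPU.
__global__ void busy_kernel(float *data, int n, int iters) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = data[i];
        for (int k = 0; k < iters; ++k) {
            v = v * 1.000001f + 0.5f;
        }
        data[i] = v;
    }
}

int main() {
    const int n = 1 << 22;                     // ~4M floats (arbitrary)
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Launch repeatedly; watch the strip charts / video windows for stutter
    // and log how the kernel time varies as the display load fluctuates.
    for (int rep = 0; rep < 100; ++rep) {
        cudaEventRecord(start);
        busy_kernel<<<(n + 255) / 256, 256>>>(d_data, n, 200);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("rep %3d: %.2f ms\n", rep, ms);
    }

    cudaFree(d_data);
    return 0;
}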