GeForce 210 for $20 after rebate: a perfect display-only GPU for CUDA development

When you are doing CUDA computation, it is very convenient to have a display-only GPU so your “real” GPU can do work without laggy display or kernel-timeout hassles. It is also required when running the Nexus debugger on a single machine. You want your display GPU to be an NVIDIA board so there are no driver issues, but you also want it to be low power so it doesn’t add heat or power problems.
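The kernel-timeout issue is visible from the CUDA runtime itself: `cudaGetDeviceProperties` reports a `kernelExecTimeoutEnabled` flag per device, which is set when the driver's display watchdog applies. A minimal sketch (the helper name `count_watchdog_free_devices` is my own) that reports which devices are safe for long-running kernels:

```cuda
// Sketch: query each CUDA device and report whether the driver's display
// watchdog (kernel execution timeout) applies to it. On a dedicated
// compute-only GPU this flag is 0, so long-running kernels are not killed.
#include <cstdio>
#include <cuda_runtime.h>

int count_watchdog_free_devices() {
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess) return 0;  // no CUDA devices
    int free_count = 0;
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp p;
        if (cudaGetDeviceProperties(&p, i) != cudaSuccess) continue;
        printf("Device %d (%s): watchdog %s\n", i, p.name,
               p.kernelExecTimeoutEnabled
                   ? "ENABLED (display attached, long kernels will be killed)"
                   : "disabled (safe for long-running kernels)");
        if (!p.kernelExecTimeoutEnabled) ++free_count;
    }
    return free_count;
}
```

With a display board driving the monitor, the compute GPU should show the watchdog disabled.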

So an ideal companion is a simple GT210 board.

Well, you can pick one up for only $20 after a rebate.
http://www.frys.com/product/6085648?site=s…CH:MAIN_RSLT_PG

From experience, it is always useful to have an extra display-only GPU sitting on your shelf, even if you don’t need it right now.

I realize this isn’t the “hot internet shopping dealz” forum, but some of them are too good not to share, and it’s a good opportunity to remind people how nice display-only GPUs are for development.

The first thing I wanted to do when I got my CUDA-enabled GPU was to try to set it up for computation only.
I want to let the motherboard integrated GPU handle all the graphics. Of course after installing the GPU card, the motherboard does not output a video signal.

If there is tutorial to set up a graphics-only GPU, I’d like to see it.
Please help me set up this configuration.
Thanks!

What Operating System are you running?

If Windows 7, simply plug both GPUs in, ensuring that the $20 card is used as your main display. (Slot 1 on your mobo would be best.)

Which CUDA-enabled GPU you’re using as your computation-only GPU will determine which drivers you need.

I run a single GTX 295 in SLI mode as my display GPU, and a GTX 280 in dedicated PhysX mode.

It is my current belief that even though my 280 is operating in dedicated PhysX mode, when I run a CUDA app that uses no PhysX, the system treats the 280 as compute-only and runs it as fast as it can.

You just need to take all video responsibilities off your CUDA-enabled GPU, and you should be golden.

(Needless to say, the CUDA app must also be programmed to look for, and utilize, more than one GPU.)
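As a sketch of that device lookup (the helper name `select_compute_device` and the pick-the-biggest-device heuristic are my own assumptions), the app can enumerate all CUDA devices and bind to the one with the most multiprocessors, which on a mixed rig will be the compute card rather than the little display board:

```cuda
// Sketch: choose the CUDA device with the most multiprocessors, assuming
// a small display card may enumerate alongside the big compute card.
// Returns the chosen device index, or -1 if no CUDA device exists.
#include <cuda_runtime.h>

int select_compute_device() {
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) return -1;
    int best = 0, best_mp = -1;
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp p;
        if (cudaGetDeviceProperties(&p, i) != cudaSuccess) continue;
        if (p.multiProcessorCount > best_mp) {
            best_mp = p.multiProcessorCount;
            best = i;
        }
    }
    cudaSetDevice(best);  // subsequent CUDA calls on this thread use it
    return best;
}
```

A GeForce 210 has far fewer multiprocessors than any serious compute card, so this heuristic reliably skips the display board.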

I find this to be a good idea, and an excellent GPU at the right price, for a computation rig.

OS: Windows 7
Just to be clear, I don’t have the $20 GPU. I only have ONE graphics card: GeForce GT 240. I want to use the INTEGRATED graphics on my motherboard to handle graphics and my GeForce GT 240 to handle computation only.
How do I set this up? Thanks for the help!

I still don’t know the answer to that myself… ;)

I also believe that when you plug a GPU into your mobo, the mobo’s built-in video is deactivated.

I think the $20 card would be the best answer.

Intel’s recent integrated chipsets work like that (not sure what Clarkdale will do, but I guess it will be the same), but NVIDIA’s and ATI/AMD’s do not. You can (at least under Linux) run an NVIDIA card in a PCI-e slot for CUDA, and use an AMD integrated GPU to drive a display. On WDDM Windows versions (so Vista and 7) it should also work, although I have not tried it. On XP and its derivatives it can’t work, because the OS only supports a single active VGA driver at a time.

I have a dual-vendor machine running Windows 7:

http://forums.nvidia.com/index.php?showtopic=156878

I plugged in my GPU into my AMD motherboard, and the motherboard video automatically deactivated.

Please explain how to set it up on Windows 7.

In most 780G/785G boards I have used, you can set the primary display adapter in the BIOS. If you set it to the built-in adapter, then both display adapters should be enumerated by the operating system (the PCI-e x16 slot probably with only 8 lanes). As I said earlier, I can’t tell you what to do with Windows 7. Profquail indicates he has gotten both AMD and NVIDIA drivers to coexist in Windows 7, so it should be possible.

This fighting, confusion, and driver hassle is exactly the kind of stress and annoyance a $20 display board avoids.

Another deal: if you’re looking for a single slot, compute 1.2 card, the most powerful you can get is a GT240. Those are only $50 (after rebate).

I use these in a couple machines too and they’re quite nice… silent, cool, no extra power connectors needed. About 50% the speed of a GTX285 (in my apps anyway).

I do not see the GT240 in the list in the “General Specifications” section of the CUDA Programming Guide. Are you certain it is 1.2- or 1.3-capable?
About a year ago, when I was buying my computer, the cheapest 1.3-capable card was the GTX260, and I have been using it successfully since then.

I have two GT240 cards.

Yes, the GT240 is compute 1.2. This means it has G200’s doubled register count, shared atomics, zero-copy memory support, and flexible coalescing. It does not have double precision support… that’s the sole difference between compute 1.2 and 1.3.
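You can check this yourself from the runtime: `cudaDeviceProp` exposes the compute capability as `major`/`minor` and the zero-copy capability as `canMapHostMemory`. A small sketch (the helper name `print_capability_report` is my own) that prints the relevant bits per device:

```cuda
// Sketch: report compute capability and the compute-1.2 features discussed
// above (zero-copy via canMapHostMemory; double precision only from 1.3 on).
// Returns the number of CUDA devices found.
#include <cstdio>
#include <cuda_runtime.h>

int print_capability_report() {
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess) n = 0;
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp p;
        if (cudaGetDeviceProperties(&p, i) != cudaSuccess) continue;
        bool has_double = (p.major > 1) || (p.major == 1 && p.minor >= 3);
        printf("Device %d (%s): compute %d.%d, zero-copy %s, double precision %s\n",
               i, p.name, p.major, p.minor,
               p.canMapHostMemory ? "yes" : "no",
               has_double ? "yes" : "no");
    }
    return n;
}
```

On a GT240 this would show compute 2.1-style output as `1.2`, zero-copy yes, double precision no.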

Some of us don’t have the luxury of extra PCI slots. Ever think of that? If it were as easy as buying an extra $20 card, I wouldn’t be on this forum trying to find a solution, would I?

Profquail, you seem to know how to re-enable the on-board video. I couldn’t find any such option in the BIOS. I guess I’m out of luck. Any suggestions?

What board and chipset is this?

I didn’t re-enable the on-board video. I’ve got two PCIe slots, and I put an nVidia card in one and an ATI card in the other.

If you can find a way to re-enable the on-board video, you’ll also need to make sure that there’s a WDDM 1.1 driver for it. Windows 7 is backwards-compatible with the Vista driver model (WDDM 1.0), but that doesn’t support loading multiple GPU drivers simultaneously.

If you can’t find the option in the BIOS, there might not be a way to enable it (perhaps the motherboard simply doesn’t support using both video devices at once due to a hardware issue). You could also try contacting the manufacturer directly to see if they know of any workaround for it (there might be a newer BIOS version or something that enables that option).

Sorry for my frustration, and thank you for posting such a good deal here.

If I’m angry, it’s only at the motherboard manufacturer for such a crappy design: automatic hard deactivation of the onboard video after plugging in a GPU, and no extra PCI slots.

It’s indeed quite bad.

There are a few possibilities: use a PCI graphics card (GeForce 9500), use a PCIe x1 graphics card (also available as a GeForce 9500), or buy a new mainboard (not always possible).

I couldn’t agree more. BTW, we don’t pay SPWorley to post this kind of stuff, he comes up with it himself :)

You could send over a Fermi c2050 or two if you’d like…

-Steve