Hardware to get started

I’m sorry to ask such basic questions, but I’m just getting started.

  1. Are the only CUDA-enabled cards PCI-e? I know about NVIDIA’s list of products, but it would take a lot of clicks to look at each one. (Aside: I wish that vendors would provide tables showing the differences between their various models.)

  2. Would I need two video cards – one for CUDA computations and one for the actual display? If not, does the CUDA computation take over the card and make weird patterns on the screen? I would appreciate someone pointing me to a good description of how this works.

I’m running 64-bit Ubuntu 9.04 on an AMD system I built myself. My GeForce FX5200 128 MB card just died. I’m planning to replace it with an EVGA 6200 512 MB AGP. My motherboard does not have PCI-e slots.

I’m a retired CS prof and doing this for fun. I have no plans to commercialize my efforts. (The main perk of being retired is not being on a critical path. :-) ) My general plan is to build another box with a dual-core Intel CPU and a CUDA-enabled GPU.

There are a few PCI G80 cards… you can search on newegg.com for them. They’re more expensive than the PCIe versions. There are really only a couple left, with pretty abysmal specs.

Don’t bother buying such a PCI card, and skip the old 6200 too. It depends very much on your budget, of course, but it can be expensive to upgrade an old-school machine… you’re better off buying a new machine, even one with very low specs. A cheesy dual-core Intel system with, say, 4 GB of RAM and a reasonably powerful GTX 260 will likely only cost $500.

No you don’t need two cards, you can share compute and display. You’re not going to get weird patterns on the screen. (well, unless you do something Bad…)
The main problem is that compute can BLOCK display: with a slow kernel your display will get clunky, and with a very slow kernel your display will freeze until it’s done. (On a display card the OS also runs a watchdog that will kill any kernel running longer than a few seconds.)
But you’re also right that the solution is a second card, then you’re free to do anything you like without worry.
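You can check which situation you’re in programmatically. Below is a minimal sketch (assuming the CUDA toolkit is installed and the program is built with nvcc) that lists each device and whether the run-time watchdog applies to it — the `kernelExecTimeoutEnabled` flag is set when the card is also driving a display:

```cuda
// Sketch: enumerate CUDA devices and report whether each one has the
// kernel-execution watchdog enabled (i.e. is also driving a display).
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA devices found.\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: %s, %d multiprocessors, %lu MB\n",
               dev, prop.name, prop.multiProcessorCount,
               (unsigned long)(prop.totalGlobalMem / (1024 * 1024)));
        // Set when the OS imposes a run-time limit on kernels,
        // typically because this card is attached to a display.
        printf("  watchdog timeout: %s\n",
               prop.kernelExecTimeoutEnabled ? "yes (display card)" : "no");
    }
    return 0;
}
```

On a dedicated compute card the watchdog line reports “no”, so long-running kernels are safe there.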

The 6200 is on sale today for $30 at Fry’s. It’s worth that to me to have my machine working again. I’m not in a huge hurry to get CUDA going.

Thank you for your very quick response. You answered my questions and saved me a lot of time.

Now I can turn my attention to researching dual video card motherboards, etc. I’ve heard that there is talk of building a CUDA-enabled card that doesn’t even have video output connectors. Meanwhile, my old university has a CUDA-enabled server that I can play with.

Actually, they’ve already been released (twice even!). The “Tesla” series of cards are compute-only devices without video connectors. The Tesla C1060 card has 4 GB of memory, doesn’t run any faster than a GTX 285, but costs 4x as much. The Tesla S1070 is four C1060 cards in a 1U rackmount case with cables to connect the CUDA devices to an adjacent computer (not included) through the PCI-Express bus.

If this is a side project for you, I would stick to the GeForce GTX 2xx series of cards. Best bang for the buck, for sure.