I want to build a cheap cluster of two or three machines, each with 2 CUDA cards. The purpose of this cluster is to write a parallel algorithm that works with N machines, each with X cards.
Which card would you suggest? I was thinking about the 8800 GTS, since they are pretty cheap (http://www.ebay.com/itm/EVGA-Corporation-NVIDIA-GeForce-8800-GTS-video-card-320-MB-/251072417155?pt=PCC_Video_TV_Cards&hash=item3a75150d83).
I agree, 8800 GTS cards are pretty cheap, but that's because they are really old (2006–2007), with fewer features than the GT200, Fermi or Kepler GPUs, the three architectures that followed the G80 and G92!
I think you'd be better off with Fermi-generation cards, for example a GT 440, GTS 450 or GTX 550 Ti if you're looking for a GPU in the $100 range. They are much faster than the 8800 GTS, even the GT 440!
Now that the GT 640 is available, I would strongly encourage you to go the Kepler way, with a GK107 with DDR3 (or GDDR5 if you can find one). Developing for Kepler will soon be the norm, and something that runs fast on Kepler will also run well on Fermi :)
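Whichever generation you pick, the "N machines, each with X cards" part of the algorithm has the same per-node structure: enumerate the local GPUs and fan work out across them (the machine-to-machine layer, e.g. MPI, sits on top and is not shown). A minimal single-node sketch, assuming CUDA 4.0+ so one host thread can drive several devices via `cudaSetDevice`; the kernel `scale` and the buffer sizes are illustrative placeholders:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel: each GPU scales its own slice of the data.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);      // the X cards in this machine
    printf("Found %d CUDA device(s)\n", deviceCount);

    const int n = 1 << 20;
    float *d_data[16] = {0};               // per-device buffers (assumes <= 16 GPUs)

    // Launch one slice of the work on each card. Kernel launches are
    // asynchronous, so the devices compute concurrently.
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaSetDevice(dev);                // bind subsequent calls to this card
        cudaMalloc(&d_data[dev], n * sizeof(float));
        scale<<<(n + 255) / 256, 256>>>(d_data[dev], n, 2.0f);
    }

    // Wait for every card to finish, then release its buffer.
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
        cudaFree(d_data[dev]);
    }
    return 0;
}
```

Running one process per machine with this loop inside, and exchanging results between processes over MPI, gives the N×X layout you describe without tying the code to any particular card.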