8800 vs 8600: CUDA differences?

Hello,

I am thinking of purchasing an 8600 GTS as a platform for learning CUDA. Are there any differences beyond the obvious ones (fewer multiprocessors, less memory, slower performance) between the 8600 and 8800 series as far as CUDA and GPU programming go?

Thanks,

Kalju

The 8600 supports every feature the 8800 supports, but the current public release of CUDA doesn’t support the 8600.

We are planning a release candidate in May that includes Linux 64-bit support, support for 8600 GTS, 8600 GT, and 8500 GT, bug fixes, and additional features.

Cyril

Thanks for the info, Cyril. And, yes … 64-bit Linux support coming soon is sweet.

Thanks for the information!

I was hoping to clarify in what sense you mean ‘not supported’. Do you mean that it definitely will not work on the 8600 for now, or is it just not supported in the sense that CUDA is not supported on distributions other than RHEL, but still tends to work just fine?

“not supported” == “device not found”
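
In practice that means the very first runtime API call reports no device. A minimal check, as a sketch using the CUDA runtime API (older toolkit behaviour may differ in the details):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    /* with no usable device this returns an error (or a count of 0) */
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        printf("device not found: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("%d CUDA device(s) found\n", count);
    return 0;
}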

thanks

I just got my 8600 a few hours ago, installed the card and the 158.19 Windows XP driver, and apparently CUDA is working. My code actually runs in non-emulation mode. Of all the SDK examples I tried, the only one that didn’t work was fluidsD3D.
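
(By “non-emulation mode” I just mean the normal device build rather than the -deviceemu build; roughly, for a hypothetical mykernel.cu:)

nvcc -o mykernel mykernel.cu                 # device build, runs on the GPU
nvcc -deviceemu -o mykernel_emu mykernel.cu  # emulation build, runs on the CPU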

Have you tried with the official beta driver (97.73)?

Massimiliano

nope.

Can I use an 8800 GTS for CUDA?

Yes

Has anyone tried the 8800 GTS/OC (almost the same clock speed as the GTX, just 12 multiprocessors/320 MB)? (Inno3D in HK have one)

It works well on a normal GTS with 320 MB here, so I think it will work fine.

Thanks, wumpus - I was actually more interested in whether anyone has experience with OC cards, as bit errors are much more critical in CUDA than when playing a game (is that why the Quadro cards are clocked slightly slower?). The company is ISO9001 certified, so there is a good chance it is well tested. I don’t think it is pushing the chip, as all the G80s no doubt come from the same die, and those with a local fault have some fuses blown to isolate the area and turn them into GTSs (+ different bonding). This raises the question of the diagnostic QA software that was discussed in another thread… I don’t think anything has appeared yet.
Eric

ed: Another option is the 8950 (it is spec’ed at 575/1800 MHz, I believe) - does anyone know whether the 8950 looks like 1x24 multiprocessors in CUDA or 2x12 (i.e. 1 or 2 devices)?
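
Once a toolkit that recognises the card is out, the 1-device-vs-2-devices question should be easy to answer by enumerating. A rough sketch (assuming a cudaDeviceProp that exposes multiProcessorCount, which later toolkits do):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("%d CUDA device(s)\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        /* one line per device: name, multiprocessor count, memory size */
        printf("device %d: %s, %d multiprocessors, %lu MB\n",
               i, prop.name, prop.multiProcessorCount,
               (unsigned long)(prop.totalGlobalMem >> 20));
    }
    return 0;
}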

If you are looking for specially certified cards, buy a Quadro. They are built by NVIDIA rather than a third-party company and feature selected and tested memory chips (and carry a longer warranty than GeForce cards).

Peter

(no, I am not getting paid by NVIDIA for writing this :) )

Also, the Quadro FX 5600 specifically is being put through more rigorous memory qualification testing, for exactly the reason Osiris specifies.

Mark

Yes, and perhaps Quadro cards should use ECC memory for serious CUDA work. All our critical machines do, and that would justify the price. Unfortunately, one would take a significant performance hit due to slower memory speed (and only DDR2?).

I found it interesting that the 8600s appearing here are 675/2000 MHz (4 multiprocessors & a 128-bit bus), giving a device memory latency of 260 GPU clocks (the 8800 GTX is 327) for a memory-bound app with 100% utilisation and perfect coalescing.
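
(The arithmetic behind those figures, assuming the maximum of 768 resident threads per multiprocessor each issuing one coalesced 4-byte read, with the bus shared evenly between multiprocessors: the 8600 GTS has 2000 MT/s x 16 bytes = 32 GB/s across 4 multiprocessors, i.e. 8 GB/s each, so 768 x 4 B = 3072 B takes ~384 ns ≈ 260 clocks at 675 MHz; the 8800 GTX has 1800 MT/s x 48 bytes = 86.4 GB/s across 16 multiprocessors, i.e. 5.4 GB/s each, so 3072 B takes ~569 ns ≈ 327 clocks at 575 MHz.)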

Eric

Will the mobile G8600M and G8400M also support CUDA in future releases?

Yours sincerely

Peter

Yes, the new GeForce 8M series will support CUDA, as will all future NVIDIA GPUs.

Will it be supported in the same update as the 8600s? “Sometime in May”, as Cyril posted.

Just something I want to clarify: very roughly speaking, what mostly determines the processing speed of the cards? Of course the memory clock, memory size, system configuration, etc. will affect performance, but is it fair to say “cards with 128 stream processors should be almost 4x faster than cards with 32 stream processors”?
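
For instance (rough numbers, assuming the published specs: 8800 GTX = 128 SPs at 1.35 GHz with 86.4 GB/s of bandwidth, 8600 GTS = 32 SPs at 1.45 GHz with 32 GB/s), raw shader throughput differs by about (128 x 1.35) / (32 x 1.45) ≈ 3.7x, while memory bandwidth differs by only about 2.7x, so a memory-bound kernel would scale by the smaller factor.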

TIA!