GeForce 8400 GS, 9400 GT, 9500 GT and Programming CUDA

Hi Everyone,

This post is related to a thread I posted in the hardware section: [url="http://forums.nvidia.com/index.php?showtopic=150400"]http://forums.nvidia.com/index.php?showtopic=150400[/url]

I’m just looking at the info on the NVIDIA website about the GeForce cards. The 9500 GT uses PCI, but the website doesn’t give any information about using the card for your own parallel processing. Does anyone know what is involved in writing your own code for this card? Is it similar to the Tesla cards, where you can just use the C language extensions to create as many threads as you want?

Cheers,

Chris

All of those GPUs are CUDA compatible and use the same programming model as the Tesla cards. I would caution against getting a PCI version (they are very rare), however, because the data transfer speed and latency of the host adapter bus are absolutely key to getting good performance. PCI is about 60 times slower than PCI-e v2.0, and that will have an enormous impact on the performance of CUDA applications on these cards compared to their PCI-e equivalents.
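If you want to see how much the bus matters on a particular card, a quick way is to time a host-to-device copy with CUDA events. The sketch below does exactly that (the 64 MB buffer size is an arbitrary choice, and error checking is omitted); it uses the same runtime API on a GeForce or a Tesla part:

[code]
#include <cstdio>
#include <cstdlib>

int main()
{
    const size_t bytes = 64 * 1024 * 1024;   // 64 MB test buffer (arbitrary)
    char *h_buf = (char *)malloc(bytes);
    char *d_buf;
    cudaMalloc(&d_buf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Time one host-to-device transfer across the bus.
    cudaEventRecord(start, 0);
    cudaMemcpy(d_buf, h_buf, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Host-to-device: %.1f MB/s\n",
           (bytes / (1024.0 * 1024.0)) / (ms / 1000.0));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_buf);
    free(h_buf);
    return 0;
}
[/code]

On a PCI card the reported bandwidth will be a small fraction of what the same code shows on a PCI-e slot, which is the whole caution above.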

Thanks for the info.

I see your point about the PCI bus being slower. However, if the code I want to execute needs very little data sent to it from the host PC, then the slow PCI bus shouldn’t hurt too much, should it?

So basically I need to set up 3000 threads that execute as quickly as possible on the GPU, but I only need to send a few integers from the host PC to the GPU, i.e. each thread will use those same few integers. After the threads have finished I just need to send a few integers back to the host PC.
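For a workload like that the bus barely features: the few input integers can be passed by value as kernel arguments, so the only copy over the bus is the handful of results coming back. A minimal sketch along those lines (the kernel name, the constants 7/3/5, and the arithmetic inside it are placeholders for whatever each thread actually computes):

[code]
#include <cstdio>

// Each of the n threads uses the same few input integers (passed by value
// as kernel arguments) and writes one result to device memory.
__global__ void compute(int a, int b, int c, int *results, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        results[i] = a * i + b * (i % c);   // placeholder for the real work
}

int main()
{
    const int n = 3000;
    int *d_results;
    cudaMalloc(&d_results, n * sizeof(int));

    // The "few integers" travel with the kernel launch itself; no bulk
    // host-to-device copy is needed. 256-thread blocks cover all n threads.
    int blocks = (n + 255) / 256;
    compute<<<blocks, 256>>>(7, 3, 5, d_results, n);

    // Copy back only the few integers actually needed on the host.
    int first[4];
    cudaMemcpy(first, d_results, sizeof(first), cudaMemcpyDeviceToHost);
    printf("%d %d %d %d\n", first[0], first[1], first[2], first[3]);

    cudaFree(d_results);
    return 0;
}
[/code]

With transfers that small, the kernel execution time should dominate, so the PCI/PCI-e difference would mostly show up in per-call latency rather than bandwidth.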

Another question: are these GeForce cards the best PCI cards available, or are there other PCI cards that will give me better performance / have more processors?

Cheers

Chris