Hello everyone! :D
I am a beginner in CUDA, but I am very interested in learning about parallel computing and GPGPU in general.
For that reason, I’m going to get an 8600GT so I can start working with it, but I’m not sure how important memory bandwidth and memory size are for certain applications. I expect to run Monte Carlo simulations as well as ALU-intensive tasks. No 3D modeling or video rendering will be done on this card.
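To make it clearer what kind of workload I mean, here is a rough sketch I put together (just a toy pi estimator with a placeholder LCG random generator, not a real simulation): each thread does thousands of arithmetic operations but touches almost no memory, which is why I suspect this kind of task is more ALU-bound than bandwidth-bound.

// toy_pi.cu -- rough sketch of an ALU-heavy Monte Carlo kernel (estimating pi).
// The tiny LCG generator and all constants are placeholders of my own, not
// something I would use for real results.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void monte_carlo_pi(unsigned int seed, int trials_per_thread, int *hits)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int state = seed ^ (tid * 2654435761u);   // per-thread LCG state
    int inside = 0;

    for (int i = 0; i < trials_per_thread; ++i) {
        state = state * 1664525u + 1013904223u;        // LCG step
        float x = (state & 0xFFFFFF) / 16777216.0f;    // in [0, 1)
        state = state * 1664525u + 1013904223u;
        float y = (state & 0xFFFFFF) / 16777216.0f;
        if (x * x + y * y <= 1.0f)
            ++inside;                                   // point fell inside the quarter circle
    }
    hits[tid] = inside;                                 // host sums these afterwards
}

int main(void)
{
    const int threads = 256, blocks = 64, trials = 10000;
    const int n = threads * blocks;

    int *d_hits;
    cudaMalloc(&d_hits, n * sizeof(int));

    monte_carlo_pi<<<blocks, threads>>>(12345u, trials, d_hits);

    int *h_hits = new int[n];
    cudaMemcpy(h_hits, d_hits, n * sizeof(int), cudaMemcpyDeviceToHost);

    long long total = 0;
    for (int i = 0; i < n; ++i) total += h_hits[i];

    printf("pi ~= %f\n", 4.0 * total / ((double)n * trials));

    cudaFree(d_hits);
    delete[] h_hits;
    return 0;
}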
I may get a 256MB GDDR3 card clocked at 1.6 GHz (GPU @ 600 MHz) or a 512MB GDDR2 card running at 800 MHz (GPU @ 540 MHz, which, by the way, I guess can easily be overclocked to 600 MHz since it comes with active cooling). The first one is a little more expensive.
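If I understand “bandwidth” correctly, and assuming both cards use the 8600GT’s standard 128-bit memory bus and that the quoted clocks are effective (DDR) data rates (those are my guesses, not confirmed specs), the peak numbers would differ by roughly 2x:

// Back-of-the-envelope peak memory bandwidth, host-side only.
// My assumptions, not confirmed specs: 128-bit bus on both cards, and the
// quoted clocks (1.6 GHz / 800 MHz) are effective data rates.
#include <stdio.h>

int main(void)
{
    const double bus_bytes = 128.0 / 8.0;      // 128-bit bus = 16 bytes per transfer

    double gddr3 = 1.6e9 * bus_bytes / 1e9;    // ~25.6 GB/s for the 256MB card
    double gddr2 = 0.8e9 * bus_bytes / 1e9;    // ~12.8 GB/s for the 512MB card

    printf("256MB GDDR3: %.1f GB/s peak\n", gddr3);
    printf("512MB GDDR2: %.1f GB/s peak\n", gddr2);
    return 0;
}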
In case it matters: of course I care about how fast the card is, but I care even more about being able to “explore” the technology without running into insurmountable boundaries, like a lack of memory.
If I need more speed, I can push things further by tweaking or overclocking the GPU, memory and shaders. But if I need more memory, there will be nothing I can do. (Would CUDA share system memory if needed, like TurboCache? I know it would make the application run much slower, but it might help break that “insurmountable boundary” if needed…) Every time I read a CUDA-related topic, I see the term “bandwidth”, and that is what makes me afraid of getting a GDDR2 card.
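From what I have read, something like the following might be how a kernel could reach system memory (a pinned, mapped host allocation). I honestly don’t know whether this card or the current toolkit supports it, so this is just a guess of what I mean; please correct me:

// Hypothetical sketch (my guess, based on what I've read about pinned/mapped
// host memory): letting the GPU see a buffer that lives in system RAM.
// I don't know if this card or the current toolkit actually supports this.
#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    // has to be set before any CUDA work creates a context
    cudaSetDeviceFlags(cudaDeviceMapHost);

    const size_t bytes = 64 * 1024 * 1024;     // 64 MB living in system RAM
    float *h_buf = 0, *d_view = 0;

    cudaHostAlloc((void **)&h_buf, bytes, cudaHostAllocMapped);
    cudaHostGetDevicePointer((void **)&d_view, h_buf, 0);

    // d_view could now be passed to a kernel, but every access would go over
    // the PCIe bus, so it would be far slower than the card's own memory.
    printf("mapped %zu bytes of host memory for the GPU\n", bytes);

    cudaFreeHost(h_buf);
    return 0;
}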
So, what do you guys think? Should I go for the faster one or the “wider” one with more memory?
Thank you so much!