As more and more of my processing moves out of Java and into OpenCL, hopefully ending in a Cloud implementation, I see the use of atomics as key. My memory requirements do not actually call for a C1060 for development; anything with 100 MB will do. The 8800 needs to go, though.
I am using Java concurrency to implement an OpenCL context/GPU pool. It can schedule thousands of iterations of kernel sets with large worksizes across multiple GPUs, so a multi-GPU test environment is what I am looking for. The GTX 295 looks like the one, but rumors of a 40 nm version called the GTX 300 have me worried that my decision could look dumb a month from now. Is this just wishful thinking by some gamers?
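For anyone curious, the pool idea is roughly the pattern below: a `BlockingQueue` of per-device context handles plus a fixed thread pool, so each task borrows a GPU, runs, and hands it back. This is only a sketch of the scheduling pattern, not my actual code — `GpuContext` is a hypothetical stand-in for a real OpenCL context/queue handle (e.g. from JOCL), and the kernel launch is stubbed out.

```java
import java.util.concurrent.*;

// Sketch of a GPU/context pool: tasks borrow a context, run, and return it.
// GpuContext is a placeholder for a real OpenCL context + command queue.
public class GpuPoolSketch {
    static class GpuContext {
        final int deviceId;
        GpuContext(int deviceId) { this.deviceId = deviceId; }
    }

    public static void main(String[] args) throws Exception {
        int numGpus = 2;                         // e.g. both halves of a GTX 295
        BlockingQueue<GpuContext> pool = new ArrayBlockingQueue<>(numGpus);
        for (int i = 0; i < numGpus; i++) pool.put(new GpuContext(i));

        ExecutorService exec = Executors.newFixedThreadPool(numGpus);
        int iterations = 1000;
        CountDownLatch done = new CountDownLatch(iterations);
        for (int i = 0; i < iterations; i++) {
            exec.submit(() -> {
                try {
                    GpuContext ctx = pool.take();  // block until a GPU is free
                    try {
                        // enqueue the kernel set on ctx.deviceId here
                    } finally {
                        pool.put(ctx);             // return the context to the pool
                    }
                    done.countDown();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        done.await();                              // wait for all iterations
        exec.shutdown();
        System.out.println("completed " + iterations + " iterations");
    }
}
```

The queue gives the back-pressure for free: with more pending kernel sets than devices, submitters simply block in `take()` until a context is returned.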
I should also mention that I already have a multi-GPU MacBook Pro, so I can develop there for a while. Everything I have heard here is about the future with "Fermi". Spreading this rumor further is not my intention. Can anyone shoot this thing down?