I’m aiming to buy a Tesla C1060 and put together a stand-alone computer for the sole purpose of running computations. I figure I’ll need the C1060 for its 4 GB of memory, since my computations use a lot of memory (I currently run them on CPUs). I’m just starting out with CUDA, so I have a lot to learn.
I’m curious: what CPU (speed, number of cores), RAM (speed/size), and motherboard (chipset, components) do I need for optimal computational performance from the Tesla/GPU? Does the answer change if I need to make a lot of round trips to and from the GPU versus rarely? If I’m able to run all computations on the GPU, does it even matter what CPU and RAM I use, as long as I have a PCIe slot? I want the best performance without any “overkill” hardware.
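To get a feel for whether round trips would even matter in my case, I did a rough back-of-envelope estimate. The numbers here are assumptions, not measurements: a PCIe 2.0 x16 link peaks around 8 GB/s in theory, and pageable-memory copies often achieve noticeably less.

```python
# Back-of-envelope estimate: does host<->device transfer time matter
# relative to compute time? The 8 GB/s figure is an assumed theoretical
# PCIe 2.0 x16 peak, not a measured value.

def transfer_seconds(bytes_moved, bandwidth_gb_s=8.0):
    """Time to move `bytes_moved` bytes over the PCIe link at the
    given bandwidth (in GB/s)."""
    return bytes_moved / (bandwidth_gb_s * 1e9)

# Example: moving a full card's worth of data (4 GiB) once each way
data = 4 * 1024**3            # 4 GiB in bytes
t_one_way = transfer_seconds(data)
print(f"one-way copy: ~{t_one_way:.2f} s")
```

So a full 4 GiB copy costs on the order of half a second each way: if the GPU then computes for a minute per round trip, transfers are around 1% overhead and the host hardware barely matters, but if each round trip only does a fraction of a second of work, the transfers (and thus host RAM speed and pinned-memory support) start to dominate.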