I am a sysadmin who was asked to set up a lab full of machines with Nvidia GeForce GTX 480 cards, to be used for GPU computing under Linux (we will be replacing older video cards in existing workstations). We are worried that the existing power supplies may not be able to cope with the power draw of the new cards when they are maxed out, and we need to test this. Our test is to measure each workstation's power consumption from the mains and multiply it by 75% (a typical power-supply efficiency) to estimate the actual DC load on the supply.
So far we’ve run some OpenGL benchmarking software, and the machine’s power draw never exceeded 315 W at the wall (which works out to roughly 236 W of DC load at 75% efficiency). But I would like to load the cards with a CUDA stress test of some sort, so that I can measure the power draw during a more relevant use case.
I could not find any GPU computational benchmarking suites. Are there any?
Is there an easy way to compile a CUDA program that would load every single core of the GPU with some computational task for about an hour? I don’t need any statistics in terms of teraflops, just to max out the card.
I’d appreciate any pointer or code snippet you could send my way.
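To give an idea of what I mean, below is the kind of thing I’ve been sketching myself: a kernel that just spins dependent FMAs and gets relaunched until an hour has passed. It’s untested, and the kernel name, launch dimensions, and iteration counts are arbitrary numbers I picked, not anything tuned for the GTX 480.

```
// burn.cu -- a crude GPU load generator, not a benchmark (untested sketch)
#include <cstdio>
#include <ctime>
#include <cuda_runtime.h>

__global__ void burn(float *out, int iters)
{
    // Each thread runs a long chain of dependent FMAs. Writing the result
    // back to global memory stops the compiler from removing the loop.
    // The values will eventually saturate to inf, which is fine for a
    // pure load test.
    float a = 1.0f + threadIdx.x * 1e-3f;
    float b = 1.0f + blockIdx.x  * 1e-3f;
    for (int i = 0; i < iters; ++i) {
        a = a * b + 0.5f;
        b = b * a + 0.25f;
    }
    out[blockIdx.x * blockDim.x + threadIdx.x] = a + b;
}

int main()
{
    const int blocks  = 1024;          // enough to oversubscribe all SMs
    const int threads = 256;
    const double runSeconds = 3600.0;  // roughly one hour of load

    float *d_out;
    cudaMalloc(&d_out, blocks * threads * sizeof(float));

    time_t start = time(NULL);
    while (difftime(time(NULL), start) < runSeconds) {
        burn<<<blocks, threads>>>(d_out, 1 << 20);
        cudaDeviceSynchronize();       // wait for the kernel before relaunching
                                       // (older toolkits used cudaThreadSynchronize)
    }

    cudaFree(d_out);
    printf("done\n");
    return 0;
}
```

If something along these lines is sensible, I imagine I’d build it with something like `nvcc -arch=sm_20 burn.cu -o burn` (the GTX 480 is compute capability 2.0) and just let it run while watching the power meter. But if there is an existing, better-tested tool for this, I’d much rather use that.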
For what it’s worth, the CPU is an Intel Core 2 Duo E6550 (2.33 GHz), and there is 2 GB of system RAM.
Thanks!