FASTRA II: More video geekporn

These guys in Belgium have some ideas on how to use seven GeForce cards in one desktop computer for 3D X-ray imaging. The system isn’t totally stable yet, though; could it be running too hot?

[url=“http://fastra2.ua.ac.be/”]http://fastra2.ua.ac.be/[/url]

I want it for Christmas! :)

Wow, great to see them pushing the envelope. The original quad-9800GX2 FASTRA really inspired a lot of high-GPU-count systems. (I certainly never would have contemplated a quad GTX 295 system without seeing FASTRA’s and Manifold’s successes.)

A pretty brave move. I will be really interested to see what sort of performance they get out of it for problems that require a lot of PCIe transactions (if their apps do). I wouldn’t be that confident a single IOAPIC can gracefully handle the volume of interrupts those cards can potentially generate.

My two GTX 260s really keep my office warm when running Folding@Home. When I add the 9600 GSO to fold as well, my 750 W power supply also folds.

So what does this machine do to an office space? Turns it into a sauna?

EDIT: this three-page article in German has details, putting the actual power consumption at 1200 watts under load. Surprisingly low, IMHO.

http://www.golem.de/0912/71893.html

Christian

Yeah, I don’t see how that could be possible, unless the duty cycle on the cards is less than 100%. I hit 1100W with four GTX 295s at max load.

Power consumption under “max load” can vary a lot from app to app.

Also I really hope somebody decides to build one of these computers in a giant metal barrel.

Huh, do you mean the mix of floating point and memory load instructions can have an observable effect on power consumption? Do memory bound kernels use less power? :)

Yes, a metric ton of switching transistors in a bunch of multipliers produces more heat than an idling memory controller.

EDIT: FurMark is an OpenGL 2.0 stress test and benchmark for Windows; it uses fur rendering algorithms to overheat the graphics card.

http://www.opengl.org/news/comments/furmar…-gpu-z-support/

A prime example of what the GPU Revolution is all about…

More GPUs = faster performance.

Go CUDA!

Now we need 64-bit BIOSes as standard, to support more GPUs.

We also need mondo-size mobos with 10 GPU slots spaced far apart… :)

A package containing the custom BIOS from ASUS and a Linux kernel patch from the FASTRA team is now available at [url=“http://fastra2.ua.ac.be/?page_id=628”]http://fastra2.ua.ac.be/?page_id=628[/url].

The BIOS skips initialization and memory address allocation for certain cards (9800GX2, GTX295, …) and defers this task to the modified 64-bit Linux kernel, which can allocate memory ranges above 4 GB.
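Once the patched kernel is running, a quick device enumeration is enough to confirm that every board was actually mapped. A minimal sketch of such a check (my own, assuming a recent CUDA runtime; it is not part of the FASTRA package):

[code]
// Sanity check: list every CUDA device the kernel exposed, with its PCI
// location and memory size, to confirm all boards were mapped.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("%d CUDA device(s) visible\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("device %d: %s, PCI %02x:%02x, %zu MB\n",
               i, prop.name, prop.pciBusID, prop.pciDeviceID,
               prop.totalGlobalMem >> 20);
    }
    return 0;
}
[/code]

On a FASTRA II-style box, each half of a dual-GPU card shows up as its own device, so the seven cards will report more than seven entries.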

Have you seen this: [url=“http://www.carri.fr/html/config_complete/config.php?id=125120”]http://www.carri.fr/html/config_complete/config.php?id=125120[/url]

Eight Tesla C1060s in a 4U rack. 32 GB of GPU memory, sweet!

Actually, on a GPU, memory-bound kernels use more power. Most transistors are devoted to routing, reordering and moving memory requests all over the chip, wires consume a lot compared with transistors in modern processes, plus you need to power the off-chip traces and DRAM chips…

Another funny fact I found is that a kernel doing only register-register MOVs consumes more power than one that does only MADs.

Because MOVs can be issued every 2 clocks, vs. 4 clocks for MADs. Even mixed MAD+MUL does not consume significantly more than MOVs. (The energy/instruction metric makes much more sense than power here.)

Even with wide SIMD units, the power consumed by arithmetic units is only a small fraction of the total power. The current challenge for computer architects is designing low-power interconnect networks for manycore chips…
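These effects are easy to observe for yourself: run a purely arithmetic kernel and a purely memory-streaming kernel back to back while watching a wall power meter. A minimal sketch (my own toy kernels with arbitrary sizes, not anyone’s actual benchmark):

[code]
// Two micro-kernels for comparing board power under different loads:
// fma_burn keeps the multipliers busy with a dependent FMA chain and
// touches almost no memory; mem_burn streams a large buffer and does
// almost no arithmetic. Run each while reading power externally.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void fma_burn(float *out, int iters)
{
    float a = threadIdx.x * 0.001f + 1.0f;
    const float b = 1.000001f;
    for (int i = 0; i < iters; ++i)
        a = a * b + 0.5f;               // dependent FMA chain, no memory traffic
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;  // keep the compiler honest
}

__global__ void mem_burn(float *buf, size_t n)
{
    size_t stride = (size_t)gridDim.x * blockDim.x;
    for (size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x; i < n; i += stride)
        buf[i] += 1.0f;                 // one load + one store per element
}

int main()
{
    const size_t n = 64ull << 20;       // 64M floats = 256 MB
    float *buf;
    cudaMalloc(&buf, n * sizeof(float));
    cudaMemset(buf, 0, n * sizeof(float));

    for (int rep = 0; rep < 100; ++rep) // phase 1: arithmetic-bound
        fma_burn<<<1024, 256>>>(buf, 100000);
    cudaDeviceSynchronize();

    for (int rep = 0; rep < 100; ++rep) // phase 2: memory-bound
        mem_burn<<<1024, 256>>>(buf, n);
    cudaDeviceSynchronize();

    printf("%s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(buf);
    return 0;
}
[/code]

Whatever the gap between the two readings turns out to be on a given board, that gap is exactly the app-to-app variation in “max load” mentioned above.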

For an overview of power issues on chips, have a read of the DARPA report into Exascale computing. Short version: building an exaflop’s worth of FPUs is easy and doable without facing any unsolved power supply issues. But you want those FPUs to fetch data from an L1 cache? Bring your own nuclear power station. And you want conventional main memory banks too? You had better really not care about your electricity bill.
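To put rough numbers on that (ballpark energy costs of my own choosing, not figures quoted from the report): at around 10 pJ per flop, an exaflop’s worth of FPUs draws about 10^18 flop/s × 10 pJ = 10 MW, painful but conceivable. Feed every flop an operand from off-chip DRAM at around 2 nJ per access and you are at 10^18 × 2 nJ = 2 GW, which is indeed a nuclear power station or two.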

Could someone tell me if a setup like that could run F@H on each GPU? I am sorry to disturb the flow of this thread but I have no one else to ask.
I have seen folding farms running four GTX 295s with 8 instances of F@H. But would it be possible to use the amount of GPU power they have in that monster to run F@H, or would it be a waste of money? If you cannot utilize each GPU in the system to its fullest, then what’s the use? I am slightly out of my area of computer experience here, but I am willing to listen and learn, and have a desire to do so. I have built high-end gaming rigs for a while now, and just recently realized how much computing power I had sitting here just playing games. F@H was something useful I could do with that wasted power.

So if anyone can help me out here, it would be greatly appreciated. I realize many may not know what F@H is, but maybe someone does and can help me.
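On the software side the answer is yes in principle: each CUDA device is addressable independently, so the usual pattern is one client instance (or one host thread) per GPU, and each half of a GTX 295 shows up as its own device. A minimal sketch of the per-GPU pattern, with a placeholder kernel standing in for whatever a real folding core would do (my own illustration, nothing F@H-specific):

[code]
// One host thread per GPU, each bound to its own device with
// cudaSetDevice() and running an independent workload.
#include <cstdio>
#include <thread>
#include <vector>
#include <cuda_runtime.h>

__global__ void dummy_work(float *x)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    x[i] = x[i] * 1.0001f + 0.5f;   // stand-in for real per-GPU work
}

void worker(int dev)
{
    cudaSetDevice(dev);             // bind this host thread to one GPU
    float *x;
    cudaMalloc(&x, 1 << 20);        // 1 MB scratch buffer
    for (int rep = 0; rep < 1000; ++rep)
        dummy_work<<<1024, 256>>>(x);
    cudaDeviceSynchronize();
    cudaFree(x);
    printf("GPU %d finished\n", dev);
}

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    std::vector<std::thread> threads;
    for (int d = 0; d < count; ++d)
        threads.emplace_back(worker, d);  // one worker per visible GPU
    for (auto &t : threads)
        t.join();
    return 0;
}
[/code]

Whether that is worth the money for F@H specifically is a separate question from whether all the GPUs can be kept busy; the pattern above keeps them busy.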

Probably not without extra fans attached, or with the standard air-based cooling replaced by liquid cooling.

With their particular app (which I suppose is mostly compute bound) they seem to be using the machine fully. They do not max out the thermal design power of the chips, but that’s a different issue.

Running F@H instead of letting your machine sit idle on the Windows desktop could easily triple your power consumption. So thinking about running F@H so as not to “waste power” is bad logic; switching the machine off when not needed would be the logical thing to do, unless you really, really want to contribute to ongoing research (which F@H actually is).

Sorry for my poor choice of wording. What I meant was that they don’t get put to any use other than my own personal recreation. I understand that folding uses power, and lots of it; I have three CPUs and several GPUs folding 24/7. But my main interest is the theoretical output of a unit such as this in F@H. Running protein simulations uses a lot of power, and I have plans to build two folding farms next year, each mobo with four GTX 295s. But if I can get more GPUs onto a single mobo and utilize all the power the GPUs have to offer F@H simulations, that would save on the cost of mobos, CPUs, and other small hardware, allowing more money to be spent on GPUs.