Tesla and Fermi cluster together...

I currently have a Tesla (S1070) cluster in production, and we’re planning to expand it, so we’re considering purchasing additional Fermi cards.
As far as I understand, the current Tesla generation supports the IEEE 754-1985 spec for single precision, while the new Fermi supports the newer IEEE 754-2008.
What would happen if I have a mixed cluster of both Teslas and Fermis?
Will I get a different result for the same input data set (we do millions of single precision arithmetic operations) for Fermi and Tesla?


Yeah, you probably will. Fermi supports fused multiply-add (FMA); GT200 doesn’t, and only supports MAD (although it can emulate FMA at a major performance penalty). That means Fermi has significantly better precision than GT200, but your results won’t line up unless you’re using the FMA intrinsics.

Thanks for the answer Tim.

So what is NVIDIA’s official stance on how to handle this issue? What should I do with the “old” Tesla cluster?

I can’t take the performance hit just to keep the results the same, yet I do need the results to be the same.

What should I do? Drop the Tesla cluster?



Well, question 1 is why you’re expecting single-precision floating-point operations to be the same across architectures in the first place. I wouldn’t necessarily trust them to be the same across Harpertown versus Nehalem, for instance, and things will certainly change again going to Sandy Bridge, which adds FMA support.

Not to mention that compiler upgrades (or even a change of compiler options) can reorder floating-point operations on the same architecture and give you different results as well.