ATI isn't interested in GPGPU

Okay, that’s a bit of an exaggeration, but I don’t think they care as much as NVIDIA.

Anyway, I think I once saw an article somewhere saying that ATI was more geared to graphical applications than to general-purpose use.
I also remember they said they were going to wait and see whether GPGPU really takes off before seriously investing in it.

DOES ANYONE HAVE A LINK TO THIS ARTICLE? (If you’ve just seen it, please let me know too; it lets me know I’m not crazy.)

ATI can do GPGPU.

However, the performance is another story.

Using wide vectors and VLIW stream processors, ATI may not fully utilize its horsepower when running applications other than 3D graphics.

NVIDIA has a more balanced architecture, although with less peak performance.
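
To make that utilization point a bit more concrete, here is a tiny illustrative sketch in plain C. It is not vendor code, and the names (including the float4 struct) are made up for the example: a wide VLIW unit only reaches its quoted peak if several independent operations can be packed into each instruction slot. Graphics-style per-component math supplies that independence naturally, while many general-purpose inner loops are dependency chains that leave most slots empty.

```c
#include <stdio.h>

/* Hypothetical 4-component type, standing in for a shader's colour/position. */
typedef struct { float x, y, z, w; } float4;

/* Graphics-like workload: the four component operations are independent,
 * so a 4- or 5-wide VLIW issue slot can be filled almost every cycle. */
static float4 scale_colour(float4 c, float gain) {
    float4 r = { c.x * gain, c.y * gain, c.z * gain, c.w * gain };
    return r;
}

/* Typical general-purpose inner loop: every iteration depends on the
 * previous result, so most VLIW slots stay empty and sustained throughput
 * falls well below the quoted peak. */
static float horner(const float *coeff, int degree, float x) {
    float acc = coeff[degree];
    for (int i = degree - 1; i >= 0; --i)
        acc = acc * x + coeff[i];   /* serial dependency chain */
    return acc;
}

int main(void) {
    float4 c = { 0.2f, 0.4f, 0.6f, 1.0f };
    float poly[4] = { 1.0f, -2.0f, 0.5f, 3.0f };
    float4 s = scale_colour(c, 2.0f);
    printf("scaled: %g %g %g %g\n", s.x, s.y, s.z, s.w);
    printf("horner: %g\n", horner(poly, 3, 1.5f));
    return 0;
}
```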

Well, wait a second: ATI has OpenCL, and before that they had CTM and BrookGPU, so I think they can accommodate general-purpose use (a minimal OpenCL example is sketched at the end of this post).

Perhaps what you’re trying to say is that they can’t do it very well.

If anyone knows about that ATI article, please let me know! (these new icons are totally awesome!)
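
Since OpenCL keeps coming up: below is a minimal SAXPY against the OpenCL 1.x host API in C, just to show that general-purpose kernels can be written portably and run on either vendor’s GPUs. It is a bare-bones sketch under simplifying assumptions (first platform and first GPU device taken, error checking omitted), not tuned or production code.

```c
#include <stdio.h>
#include <CL/cl.h>

static const char *kernel_src =
    "__kernel void saxpy(float a, __global const float *x,\n"
    "                    __global float *y)\n"
    "{\n"
    "    size_t i = get_global_id(0);\n"
    "    y[i] = a * x[i] + y[i];\n"
    "}\n";

int main(void)
{
    enum { N = 1024 };
    float x[N], y[N];
    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    cl_int err;
    cl_platform_id platform;
    cl_device_id device;

    /* First platform, first GPU device - error checking omitted for brevity. */
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    /* Compile the kernel from source at run time. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "saxpy", &err);

    /* Device buffers, initialised from host memory. */
    cl_mem dx = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof x, x, &err);
    cl_mem dy = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                               sizeof y, y, &err);

    float a = 3.0f;
    clSetKernelArg(k, 0, sizeof(float), &a);
    clSetKernelArg(k, 1, sizeof(cl_mem), &dx);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dy);

    /* One work-item per element, then read the result back. */
    size_t global = N;
    clEnqueueNDRangeKernel(queue, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, dy, CL_TRUE, 0, sizeof y, y, 0, NULL, NULL);

    printf("y[0] = %g (expected 5)\n", y[0]);

    clReleaseMemObject(dx); clReleaseMemObject(dy);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(queue); clReleaseContext(ctx);
    return 0;
}
```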

I don’t fully understand why you are insisting on that exact statement (that ATI/AMD “is not interested in GPGPU”). I noticed your posts over at the GPGPU forum, and you already got a proper reply there: even if the article you’re looking for exists, and even if it contains some statement that AMD/ATI was, at the time the article was written, more interested in the computer-graphics features of its hardware than in its usability for GPGPU work, it would still be very bold to claim that ATI/AMD is not interested in GPGPU at all.

Obviously there is a big market for GPGPU applications, and AMD/ATI is making some moves, with the recent OpenCL-capable releases of their SDK, to try to win at least a piece of that market; so I’d say they are interested. The issue is that they are late: NVIDIA has been in this market for almost 3 years, so if you try to use AMD’s tools and hardware for GPGPU work, you’ll still encounter lots of small obstacles that have already been ironed out in NVIDIA’s hardware and tools.

Still, nobody knows what the future brings: if they start listening to what developers want, if they polish their tools, and if they start delivering hardware that is faster/cheaper/whatever than NVIDIA’s, then certainly many among us would start to get interested in their offerings too. So, as already advised: if you’re really researching this for a thesis, just skip making any kind of mumbo-jumbo marketing-derived claims, no matter what their source.

You are remembering an article that did exist. The only problem is that it was a statement made by Nvidia about ATI, after the release of the 5XXX series.
And no, ATI isn’t nearly as strong in the GPGPU area as Nvidia. Until recently AMD/ATI had a bad cash-flow issue and probably hasn’t really had the resources to put into R&D. They are trying to compete with Nvidia in a different kind of way. I wouldn’t be surprised to see something along the lines of the Intel Larrabee chip come out of the red team’s camp. There is a lot going on that we have only a small clue about, as both camps tend to be fairly closed-mouthed about products in the R&D stage.

Having said all that, today the fastest GPGPU machine in the TOP500 list (currently number 5 in the world) is an ATI machine - Tianhe-1.

Umm, isn’t a fully qualified double precision LINPACK run required to be listed in the Top 500?

I thought that TFlops-level double precision wasn’t supported by either GPU - and that may only change with Fermi.

Yes, and the Radeon HD4870 they use can do double precision. I think you will find they are using an approach similar to what NVIDIA took with the Tokyo Tech Tsubame machine, using a combination of overlapping GPU and CPU SGEMM to do the factorization.
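
For anyone wondering what the “overlapping GPU and CPU GEMM” approach mentioned above actually looks like, here is a rough sketch in plain C. It is only an illustration under assumptions of mine, not the Tianhe or Tsubame code: the trailing-matrix update is split column-wise, the GPU panel goes through a hypothetical gpu_sgemm_async()/gpu_wait() pair (stand-ins for whatever vendor API a real machine would use; the stubs below just call the host BLAS so the file compiles and runs), and the CPU updates its own panel with cblas_sgemm() in the meantime.

```c
/* overlap_gemm.c - illustrative only. Link against a CBLAS implementation,
 * e.g.  cc overlap_gemm.c -lcblas  */
#include <stdio.h>
#include <stddef.h>
#include <cblas.h>

/* Hypothetical GPU helpers. A real implementation would enqueue an
 * asynchronous GEMM on the accelerator and later wait for it; these stubs
 * just run the same update on the host so the sketch is self-contained. */
static void gpu_sgemm_async(int m, int n, int k,
                            float alpha, const float *A, int lda,
                            const float *B, int ldb,
                            float beta, float *C, int ldc)
{
    cblas_sgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                m, n, k, alpha, A, lda, B, ldb, beta, C, ldc);
}
static void gpu_wait(void) { /* would block until the GPU GEMM finishes */ }

/* Trailing-matrix update C -= A * B (column-major), with the columns of C
 * split between GPU and CPU so both can work at the same time;
 * gpu_fraction is tuned per machine to balance the two sides. */
static void overlapped_update(int m, int n, int k,
                              const float *A, int lda,
                              const float *B, int ldb,
                              float *C, int ldc,
                              double gpu_fraction)
{
    int n_gpu = (int)(n * gpu_fraction);   /* columns handled by the GPU */
    int n_cpu = n - n_gpu;                 /* columns handled by the CPU */

    if (n_gpu > 0)                         /* launch GPU part, return at once */
        gpu_sgemm_async(m, n_gpu, k, -1.0f, A, lda, B, ldb, 1.0f, C, ldc);

    if (n_cpu > 0)                         /* CPU works on its panel meanwhile */
        cblas_sgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                    m, n_cpu, k, -1.0f,
                    A, lda,
                    B + (size_t)ldb * n_gpu, ldb,
                    1.0f,
                    C + (size_t)ldc * n_gpu, ldc);

    gpu_wait();                            /* join before the next panel step */
}

int main(void)
{
    enum { M = 2, N = 2, K = 2 };
    float A[M * K] = { 1, 0, 0, 1 };       /* 2x2 identity, column-major */
    float B[K * N] = { 1, 2, 3, 4 };
    float C[M * N] = { 10, 10, 10, 10 };

    overlapped_update(M, N, K, A, M, B, K, C, M, 0.5);
    /* Expect C - A*B = { 9, 8, 7, 6 } in column-major order. */
    printf("%g %g %g %g\n", C[0], C[1], C[2], C[3]);
    return 0;
}
```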

You are right, but they also have 5120 quad-core Xeons, as well as 2560 dual-GPU cards. As a cluster, the efficiency number is actually pretty poor; they only report an Rmax/Rpeak ratio of around 0.5, which probably means they could do a lot better if they worked on their code…
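
Just to spell out the efficiency figure being thrown around: it is simply the ratio of the sustained LINPACK result to the theoretical peak, Rmax/Rpeak. A trivial sketch in C (the two numbers below are placeholders, not real data; plug in the values from the Top500 entries linked later in this thread):

```c
#include <stdio.h>

int main(void)
{
    /* Placeholders only - substitute the Rmax and Rpeak figures from the
     * Top500 pages referenced in this thread. */
    double rmax_tflops  = 500.0;    /* sustained LINPACK result */
    double rpeak_tflops = 1000.0;   /* theoretical peak */

    printf("LINPACK efficiency: %.1f%%\n",
           100.0 * rmax_tflops / rpeak_tflops);
    return 0;
}
```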

A side note, but it looks like the Australian CSIRO have just got themselves a reasonably sized S1070-based GPGPU cluster, as described here. No LINPACK numbers (although it seems they are talking up the single precision performance more than the double precision capabilities of the machine).

Thanks, everyone, for your replies.

The purpose of that article was to support my simple explanation of why NVIDIA is much more prevalent than ATI for GPGPU (and also to give a reason why I decided to start with NVIDIA over ATI). The article does exist (thanks to ExtremeGrandpa for confirming), even though it turns out to be an NVIDIA claim, and cgorac provided a very good explanation of what I wanted to know.

@avidday: hey, that’s pretty good; I was only aware of the Tesla machine in Japan, which is only ranked 56 in the world now.

Apologies for the title, it was meant to attract attention (and look, I got 6 replies overnight!).

For the Tianhe, Rmax is less than 50% of Rpeak.

Maybe it is caused by the RV770 architecture.

Maybe it is caused by something else - a sub-optimal math library, for example.

reference:

http://www.top500.org/list/2009/11/100

You may be thinking of David Kanter’s article, where he uses a few sentences to muse about whether or not NVIDIA is targeting GPGPU too much. However, his impressions seem quite favorable.

But to see some conclusions from AMD themselves, you can look at their presentations. This is an ATI slide deck attacking Fermi because it’s too focused on GPGPU.

Make sure to click forward through the next few slides. AMD basically says that it’s a waste to use die area for GPGPU double precision and ECC since you don’t need them for games, and therefore Fermi is a bad design.

The TSUBAME system with GT200s is number 56 in the same list and only gets 53% of peak.

THANK YOU, SPWorley!

That’s exactly what I was trying to get at! ATI is not focusing their hardware on GPGPU as much as NVIDIA, and those attack slides are evidence of them shunning advances towards GPGPU (which is even better than the article).

Fantastic. Now how would I go about sourcing this “AMD Confidential” set of slides?

(Oh, and now that I think about it, that article was from before Fermi and the ATI 5000 series.)

I think you’re overstating things there, even with the assumption that the slides are genuine. I believe that there is something of a tradition in the computer industry of rubbishing what your competitor is doing, just before announcing a similar product yourself. If ATI were to start promoting their hardware for GPU computing, the spotlight would immediately turn onto the miserable state of their software stack (CUDA is bleeding edge; OpenCL is sticking your arm in a wood chipper). Better to trash NVIDIA’s efforts for now, until the software engineers get the kinks worked out of OpenCL.

I’ll give you my quick take on GPGPU and what I feel is going down.
NVIDIA is prepping Fermi to be the workhorse for the new industry, as you know, and as a few people have said, when it’s released it will put AMD a generation behind. I’m not so sure about that, because everyone forgets about Intel and Larrabee. My point is that Larrabee is based on x86 cores, and Intel is going after the same market NVIDIA is trying to dominate with Fermi. AMD comes into this as well, with their own x86 license and an architecture similar to Intel’s Larrabee, based on x86 cores (so far unannounced, but believe it, it will come - it’s only logical).
My point is that NVIDIA has no x86 license, and they are going to get closed out of the market by the competing tech, as it will be more mainstream with Intel and AMD and their x86-based offerings.

Fermi is NVIDIA’s last hope to stay alive; if it fails, so will they!

Thanks for such profound insight and analysis.

I’m in fact highly suspicious of Nikolai’s inflammatory remarks here.

I can’t help but sense the “profound” sarcasm there.

After all, I said it was my quick take on it.

Yeah, I’ll concede the possibility that it’s just a case of competitors taking shots at each other.
Perhaps we can all agree that ATI plays a role in GPGPU, but as of right now, NVIDIA is ahead.

I’m quite curious to see how NVIDIA deals without an x86 license.