NVIDIA neglected the G92 design, so it couldn’t shrink it to 40nm. It diverted resources to the badly delayed GT300 instead. Now, to cover up its mistake, it’s rebranding the 9800M GTX as a GTX 280M, implying that they shrunk the GTX 280 down to mobile. As any CUDA programmer will tell you, it just isn’t the same. Not even close! It’s still compute capability 1.1 and still 128 cores.
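Don’t take my word for it; here’s a minimal sketch (not NVIDIA’s official deviceQuery sample, just the same idea via the CUDA runtime API) that prints what the card actually reports. The 8-cores-per-multiprocessor figure assumes these pre-Fermi parts:

```cuda
// Minimal deviceQuery-style check: print each CUDA device's compute
// capability and approximate core count. A "GTX 280M" reports 1.1,
// like the G92, not the 1.3 of the desktop GTX 280.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // Pre-Fermi parts have 8 scalar cores per multiprocessor, so a
        // 16-MP G92 derivative shows up as 16 * 8 = 128 cores.
        printf("Device %d: %s, compute %d.%d, %d MPs (~%d cores)\n",
               dev, prop.name, prop.major, prop.minor,
               prop.multiProcessorCount, prop.multiProcessorCount * 8);
    }
    return 0;
}
```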
NVIDIA, start executing better! Your marketing cannot make up for poor execution forever! You’re not hedging against design slips and market blips, and a little risk mitigation goes a long way. Above all, DON’T LIE TO YOUR FANS! If you had a 40nm release right now, you’d rule the world. Instead, you’ve pissed people off with misleading marketing and given AMD and Intel huge openings.
For every one person who feels lied to, 20 will hear the bad news. You’re on a roll; don’t blow it with garbage like this!
It may be rebranded to some degree, but given the power requirements of the desktop GT200 line, it may be a while (even with die shrinks) before we see something like that in a notebook (even a really high-end gaming notebook).
Also, if you compare the specs of the 9800M GTX and the GTX 280M, you’ll see that the newer one has somewhat upgraded specs (more shader cores and higher clock speeds).
EDIT: Speaking of power requirements, the lowest TDP for a double-precision card is 171 watts (from the Wikipedia article). That’s more evidence that it will probably be some time before we see anything like that in a notebook (perhaps after the next die shrink?).
dcbarton, I also felt crappy after I discovered that the Quadro 5600 is actually an 8800 Ultra, and that I can turn a 9800GX2 into a 4600 x2. I can even unlock the smoothed-line features on the lowest-end 8400GS. I was really disappointed to see what I had paid for when I bought a nearly top-of-the-line Quadro.
The 280 was a huge architectural leap, way beyond a minor shrink. Using the 280 name is utterly misleading, and not by mistake: they know they SHOULD be releasing a real 280M. I had to talk a buddy out of buying one the other day; he thought it was a 280. He’s a gamer. Just because you CAN trick people doesn’t mean you SHOULD. Completely dishonest. They didn’t have to use the 280 name.
What about consumers? I know several who thought it was a 280 and were excited. This is the stuff of class-action lawsuits, and I hope someone files one to teach them a lesson. And I’ve been a huge supporter!
Is it unequivocally correct that the 280M and 260M are only compute capability 1.1, not 1.3? I would really like to have a laptop CUDA environment, but it MUST support double precision. I could not find the answer in the 2.2 beta programming guide just now.
Because he needs it? I doubt he wants it just to brag about it. Maybe he’s using double-precision desktop cards and wants to have a mobile demo machine or something…
And to answer the question, no…it’s not double-precision enabled.
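For anyone who wants to check their own machine, here’s a simple sketch: double precision arrived with compute capability 1.3 (the desktop GT200), so the runtime test is just a comparison of the reported major.minor version:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Double precision requires compute capability 1.3 or higher.
// A GTX 280M/260M reporting 1.1 will fail this test.
bool supportsDouble(int dev) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, dev);
    return prop.major > 1 || (prop.major == 1 && prop.minor >= 3);
}

int main() {
    printf("Device 0 supports double precision: %s\n",
           supportsDouble(0) ? "yes" : "no");
    return 0;
}
```

Remember too that nvcc only emits real double-precision code when you compile with -arch=sm_13; on lower targets it demotes doubles to float with a warning.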
Mobile cards are always different from their desktop “equivalents” and are never just the same card made mobile. They always have fewer shaders, consume less power, and have features stripped from them. The M at the end indicates this; that is how they can call it a 280, because the added M shows it’s a different card.
You could apply the same argument to the 8800 GT and 8800 GTX: they have different specs but are both still 8800s. The GTX has 768MB of RAM on a 384-bit bus, while the GT has 512MB on a 256-bit bus. The only difference is the X. Argument closed, move along.
I think NV could really be doing a better job of branding the cards with respect to CUDA. As a developer, it’s hard enough to work out which cards support which features; if we’re ever going to target consumers with anything other than the base feature set, we need to be able to make statements like ‘200 series and above’ with clarity and confidence, or people are just going to get confused and move on.
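To make that concrete, here’s a rough sketch of what the confusing branding forces on us today: dispatching at runtime on the reported compute capability rather than on the marketing name. The kernel names are hypothetical placeholders, not a real API:

```cuda
#include <cuda_runtime.h>

// Hypothetical kernels standing in for a double-precision feature
// and its single-precision fallback.
__global__ void solveDouble(double *x, int n) { /* real work here */ }
__global__ void solveFloat(float *x, int n)   { /* real work here */ }

void launchBestKernel(double *xd, float *xf, int n) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // The brand ("GTX 280M") tells us nothing; only major.minor does.
    bool hasDouble = prop.major > 1 || (prop.major == 1 && prop.minor >= 3);
    int blocks = (n + 255) / 256;
    if (hasDouble)
        solveDouble<<<blocks, 256>>>(xd, n);  // needs nvcc -arch=sm_13
    else
        solveFloat<<<blocks, 256>>>(xf, n);
}
```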
Thanks for this - I won’t be getting a 260/280M laptop in that case. And your speculation as to why I want it is spot on. I want to be able to give demos/lectures on scientific computing, not all of which makes any sense with only single precision. Lugging a large desktop around is not really practical.