It looks like a decent gaming part… taking the crown of fastest single card by a solid but not huge margin.
Most reviews don't concentrate on CUDA apps, but it's clear that for CUDA we're seeing the very good scaling we expected, even with no code changes.
Folding@Home in particular runs at about 180% of the speed of a GTX 285, which is right in line with the single-precision throughput calculations.
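(Rough back-of-envelope from the published shader clocks, counting one multiply-add, i.e. 2 FLOPs, per core per clock for both chips and ignoring GT200's co-issued MUL: GTX 480 has 480 cores at ~1401 MHz, so roughly 480 × 1.401 × 2 ≈ 1345 SP GFLOPS; GTX 285 has 240 cores at ~1476 MHz, so 240 × 1.476 × 2 ≈ 709 GFLOPS. That's a ratio of about 1.9x, so ~180% in Folding@Home is in the right ballpark.)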
Power and temperatures look higher than we want, though.
Certainly for CUDA users, the GTX480 is going to be terrific. The gamers will like it, but it’s really great for us!
I reformatted the links assembled by DegustatoR on the B3D forum.
Can’t wait to see how much difference the L1/L2 caches make for hydrodynamics and other workloads, how the clustering experiment this summer works out, and so on… could be very interesting indeed.
The Hexus review states that the GeForce cards limit double precision to 1/8 of single-precision throughput (vs. the 1/2 documented for the architecture in general). Can anyone confirm this?
I’m still digging around the site, so I might have missed the detailed specs.
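If nobody has anything official, I suppose it's easy enough to measure once a card is in hand. A rough sketch of what I have in mind (kernel names and launch parameters are arbitrary, compile with -arch=sm_20): time a long chain of dependent multiply-adds in float and then in double and compare; a ~2x slowdown would mean 1/2 rate, while ~8x would confirm the 1/8 cap.

```
#include <cstdio>
#include <cuda_runtime.h>

// Long chain of dependent multiply-adds; write the result out so the
// compiler can't optimize the loop away.
template <typename T>
__global__ void fma_chain(T* out, int iters)
{
    T a = (T)threadIdx.x * (T)0.001 + (T)1.0;
    T b = (T)1.0000001;
    T c = (T)0.0000001;
    for (int i = 0; i < iters; ++i)
        a = a * b + c;          // one FMA per iteration
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;
}

template <typename T>
float time_kernel(int iters)
{
    const int blocks = 120, threads = 256;
    T* d_out;
    cudaMalloc(&d_out, blocks * threads * sizeof(T));

    cudaEvent_t start, stop;
    cudaEventCreate(&start); cudaEventCreate(&stop);
    cudaEventRecord(start);
    fma_chain<T><<<blocks, threads>>>(d_out, iters);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaFree(d_out);
    return ms;
}

int main()
{
    cudaFree(0);                     // force context creation before timing
    const int iters = 1 << 20;
    float ms_sp = time_kernel<float>(iters);
    float ms_dp = time_kernel<double>(iters);
    printf("SP: %.1f ms   DP: %.1f ms   DP/SP slowdown: %.1fx\n",
           ms_sp, ms_dp, ms_dp / ms_sp);
    return 0;
}
```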
I’m amazed this release is getting as good a reaction as it is. For me, it’s a total disaster. Even the GTX 470 is going to cost twice what I was paying for the GTX 260, and yet the texture fill rate (which is basically the bottleneck for my application) is actually LOWER! I can see the L1/L2 cache being great for some applications (and I will probably do some investigation to see if I can afford to stop using the texture cache now), but it seems to come at a very high price.
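The investigation I have in mind is roughly the comparison below: the same read done through the classic texture-reference path versus a plain global load, which Fermi routes through L1/L2. Names and sizes are just placeholders of mine, and a pure streaming pattern like this won't show the caches at their best, but it's the skeleton I'd start from:

```
#include <cstdio>
#include <cuda_runtime.h>

// Classic 1D texture reference (the pre-Fermi way of getting cached reads).
texture<float, 1, cudaReadModeElementType> tex_in;

// Read through the texture cache.
__global__ void scale_tex(float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = tex1Dfetch(tex_in, i) * 2.0f;
}

// Plain global load; on Fermi this goes through L1/L2 instead.
__global__ void scale_global(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;
}

int main()
{
    const int n = 1 << 22;
    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemset(d_in, 0, n * sizeof(float));
    cudaBindTexture(0, tex_in, d_in, n * sizeof(float));

    dim3 block(256), grid((n + 255) / 256);
    cudaEvent_t t0, t1; cudaEventCreate(&t0); cudaEventCreate(&t1);

    cudaEventRecord(t0);
    scale_tex<<<grid, block>>>(d_out, n);
    cudaEventRecord(t1); cudaEventSynchronize(t1);
    float ms_tex; cudaEventElapsedTime(&ms_tex, t0, t1);

    cudaEventRecord(t0);
    scale_global<<<grid, block>>>(d_in, d_out, n);
    cudaEventRecord(t1); cudaEventSynchronize(t1);
    float ms_glob; cudaEventElapsedTime(&ms_glob, t0, t1);

    printf("texture: %.2f ms   plain global (L1/L2): %.2f ms\n", ms_tex, ms_glob);
    cudaUnbindTexture(tex_in);
    return 0;
}
```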
Precisely. I would need to see something from NVidia. As much as such a move might make sense from a product segmentation standpoint, I’m not sure NVidia would do such a thing. However, if true, it would require me to wait for Tesla, so I’m trying to follow up before ordering something that could (possibly) be useless to me.
No confirmation, but I have already read a similar claim on another site, following the deep-dive event where the graphics details of the Fermi architecture were revealed.
It would certainly be a disappointment if this is true. At my job, using GeForce for development and Tesla for deployment is quite a natural mix, and with a cap like this, algorithms developed on a GeForce might end up non-optimal on Tesla…
Yesterday I went to PAX East for the official GTX480/470 unveiling at Boston’s Hynes Convention Center. I would estimate there were 2000+ people in the theater for the NVIDIA presentation and many times that outside in the exhibit halls.
Since this was a gamer convention, the NVIDIA theater presentation made no explicit mention of CUDA, but the crowd was very impressed with the 3D Vision Surround and “Rocket Sled” PhysX demos. There wasn’t much talk of frame rates; instead it was all about more resolution, more features, and more realism.
NVIDIA, PNY, and ZOTAC had plenty of demonstration machines, all running at 1080p. I can verify that the air coming out of the dual-GTX480 SLI boxes was quite warm.
The NVIDIA booth did mention CUDA-powered products like BaDaBoom’s video transcoder.
I believe that NVIDIA may be able to ramp up marketing of CUDA/GPGPU to the mainstream as more consumer-focused applications start appearing (from the people in this forum!).
It’s a very exciting milestone for CUDA developers and partners!
On a sad note, I unfortunately did not win one of the two free GTX 480s that were given away.
So, considering that FurMark produces the most thermal stress of any OpenGL graphics application, does anyone know of a CUDA equivalent? What kind of CUDA code draws the most power? I’d suspect it is compute-bound code, but what kind of computation?
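My own guess at a CUDA "FurMark" would be a kernel that keeps every SM saturated with independent FMA chains, roughly like the sketch below (entirely my own guess, nothing official; real power draw will also depend on occupancy and memory traffic, and if the card drives a display the watchdog will kill very long kernels, so keep the iteration count modest):

```
#include <cstdio>
#include <cuda_runtime.h>

// Keep the ALUs as busy as possible: several independent FMA chains per
// thread so the scheduler always has ready work.
__global__ void burn(float* out, int iters)
{
    float a = threadIdx.x * 0.001f + 1.0f;
    float b = a + 0.5f, c = a + 0.25f, d = a + 0.125f;
    for (int i = 0; i < iters; ++i) {
        a = a * 1.0000001f + 0.0000001f;
        b = b * 0.9999999f + 0.0000002f;
        c = c * 1.0000002f + 0.0000003f;
        d = d * 0.9999998f + 0.0000004f;
    }
    // Write the results so the compiler cannot remove the loop.
    out[blockIdx.x * blockDim.x + threadIdx.x] = a + b + c + d;
}

int main()
{
    const int blocks = 1024, threads = 256;
    float* d_out;
    cudaMalloc(&d_out, blocks * threads * sizeof(float));

    // Launch repeatedly so the card stays loaded while you watch temps/power.
    for (int pass = 0; pass < 100; ++pass) {
        burn<<<blocks, threads>>>(d_out, 1 << 20);
        cudaThreadSynchronize();
        printf("pass %d done\n", pass);
    }
    cudaFree(d_out);
    return 0;
}
```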
Same here. I’ll wait for mid-range parts with the GF100-based core logic, something in the 128-256 CUDA core category. I just hope they don’t cut out double precision entirely like they did with the GT220/GT240 parts.