Oops, I forgot to take a picture of the note outside my cubicle that says “GF100 Appreciation Station” where my name should be. Oh well.
Although the gamer review sites seem to dislike the GTX 470, that looks like an excellent “bang-for-the-buck” part to test out the new architecture.
Now to wait for these things to appear in the retail channels… Got test kernels to run!
I see it’s not too bad even for games… :-)
Can’t wait to see how much difference the L1/L2 caches make for hydrodynamics and other things, how the clustering experiment this summer will work out, and so on… could be very interesting indeed.
The Hexus review states that the GeForce cards limit double precision to 1/8 single precision throughput (vs. 1/2 documented for the architecture in general). Can anyone confirm this?
I’m still digging around the site, so I might have missed the detailed specs.
I’m amazed this release is getting as good a reaction as it is. For me, it’s a total disaster. Even the GTX 470 is going to cost twice what I was paying for the GTX 260, and yet the texture fill rate (which is basically the bottleneck for my application) is actually LOWER! I can see the L1/L2 cache being great for some applications (and I will probably investigate whether I can afford to stop using the texture cache now), but it seems to come at a very high price.
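For anyone weighing the same trade-off, a minimal comparison sketch might look like the following — one kernel reading through the classic texture-reference path, one reading plain global memory, which on Fermi goes through the new L1/L2. All names and the trivial ×2 workload are made up for illustration; a real test would time both over your actual access pattern.

```cuda
#include <cuda_runtime.h>

// Old-style (pre-Fermi) texture reference path.
texture<float, 1, cudaReadModeElementType> texIn;

__global__ void viaTexture(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = tex1Dfetch(texIn, i) * 2.0f;  // read through texture cache
}

__global__ void viaGlobal(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * 2.0f;  // plain load: cached in L1/L2 on Fermi
}
```

With a coalesced, streaming pattern like this, the two should behave similarly on Fermi; the interesting case is scattered or 2D-local reads, where the answer is much less obvious.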
OTOH, they are not presenting any evidence whatsoever in support of their claims.
Precisely. I would need to see something from NVidia. As much as such a move might make sense from a product segmentation standpoint, I’m not sure NVidia would do such a thing. However, if true, it would require me to wait for Tesla, so I’m trying to follow up before ordering something that could (possibly) be useless to me.
Nvidia has, on numerous occasions in white papers and at conferences, stated that the Fermi architecture supports DP at half the rate of SP.
Yes I’m also very excited to see how some of my apps will scale with the new caches!
Is there anyone who can post results from any interesting CUDA apps on Fermi at this time? (Except NV people, I guess not…)
No confirmation, but I have already read a similar claim on another site after the deep-dive event where the graphics details of the Fermi architecture were revealed.
It would certainly be a disappointment if this is true. At my job, using GeForce for development and Tesla for deployment is quite a natural mix, but then algorithms developed on a GeForce might be non-optimal on Tesla…
Yesterday I went to PAX EAST for the official GTX480/470 unveiling in Boston’s Hynes Convention Center. I would estimate there were 2000+ people in the theater for the NVIDIA presentation and many times that outside in the exhibit halls.
Since this was a gamer convention, the NVIDIA theater presentation made no explicit mention of CUDA but the crowd was very impressed with the 3D Vision Surround and the “Rocket Sled” PhysX demos. There wasn’t much talk of frame-rates, instead it was all about more resolution, more features and more realism.
NVIDIA, PNY and ZOTAC had plenty of demonstration machines all running at 1080p. I can verify that air coming out of the dual-GTX480 SLI boxes was quite warm.
The NVIDIA booth did mention CUDA-powered products like BaDaBoom’s video transcoder.
I believe that NVIDIA may be able to ramp up marketing of CUDA/GPGPU to the mainstream as more consumer-focused applications start appearing (from the people in this forum!).
It’s a very exciting milestone for CUDA developers and partners!
On a sad note, I unfortunately did not win one of the two free GTX 480s that were given away.
uh, oh. Don’t ever run Furmark on this beast, or consider buying some earplugs first. ;)
So, considering that Furmark produces the most thermal stress of any OpenGL graphics application, would anyone know a CUDA equivalent? What kind of CUDA code uses the most power? I’d suspect it is compute-bound code, but what kind of computation?
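My speculative stab at a CUDA "power virus" would be a long multiply-add loop with no memory traffic, so every SP stays busy with no stalls. The kernel name, constants, and iteration count below are all invented; whether this actually draws more power than a mixed compute/memory load is exactly the open question.

```cuda
// Sketch of a compute-bound stress kernel (hypothetical, untuned).
__global__ void burn(float *out, int iters)
{
    // Per-thread starting value, just to keep the compiler honest.
    float a = threadIdx.x * 0.001f + 1.0f;
    float b = 1.000001f;
    for (int i = 0; i < iters; ++i)
        a = a * b + b;  // dependent multiply-add chain, zero memory stalls
    // Write the result so the loop can't be optimized away.
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;
}
```

One caveat: a single dependent chain per thread limits instruction-level parallelism, so several independent accumulators per thread might keep the units even busier.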
Look at another thread in this forum. While the Fermi architecture “supports” 1/2 DP, evidently the consumer cards are purposely hobbled to 1/8 DP.
Yes, I noticed that. So there will be no 4xx for me then.
Same here. I’ll wait for mid range parts with the GF100 based core logic. Something in the 128-256 CUDA core category. I just hope they don’t cut out double precision entirely like they did with the GT220/GT240 parts.
Unlike before, there isn’t an obvious hole in the compute capability numbering for a Fermi-without-double architecture. Hopefully that’s a good sign…
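Worth noting: the runtime only tells you the compute capability, not the DP throughput ratio, so a query like the one below can confirm whether doubles are supported at all (capability 1.3 or higher) but can't distinguish a 1/2-rate part from a 1/8-rate one. This is just a standard device-query sketch, nothing Fermi-specific.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // major.minor >= 1.3 means double precision exists on this part;
    // the SP:DP rate ratio is not exposed by the runtime.
    printf("%s: compute capability %d.%d\n",
           prop.name, prop.major, prop.minor);
    return 0;
}
```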