Price trends for C1060 & S1070: are they going to drop?

Hello

With the coming of Fermi, I would assume the C1060 & S1070 would be dropping in price. Or have they already? How much do you guys think they may drop?

I don't think NV will price FERMI at TESLA’s price now and then lower TESLA’s price… Why would they lose revenue?

Instead, I think they will price FERMI much higher because its availability is going to be limited.

I don’t imagine prices falling at all. The GT200b is supposedly out of production (or at least the industry rags reported that NVIDIA were imposing a Dec 2009 cut off date for new orders for kits from their AIB customers), so there probably are not going to be any more wafers diffused at TSMC. Tight supply in the AIB channel has seen the majority of GT200 Geforce SKUs discontinued, and what there is isn’t falling in price at all. Prices for GT200 Geforces actually seem to have been rising in my part of the world recently.

Whatever Tesla-bin GT200 chips NVIDIA has now are probably all they are going to have. GT200 boards will probably continue to be available for a while, both for continuity during the transition to Fermi-based Tesla and as spares to support existing customer installations. But I don’t imagine the prices will drop.

oh geez, thanks for the replies

so the C1060 is $1300, then the C2050 is $2500, and then the next one, C3050(?), will be $3500? holy cow, this monopoly thing is no good

i wish nvidia would get some stiffer competition in the gpgpu department…

Nikolai,

You are right. But I hope the CUDA programming model is open to all, or is it NVIDIA proprietary?

If it is open, AMD could implement CUDA on their GPUs… OpenCL, I think, is redundant… because in any case it is so close to CUDA. Why not make the CUDA programming model the computing standard? That would solve this monopoly issue… In any case, I think 1 teraflop for 2.5K is darn cheap… No?

in some cases yeah it’s pretty good, but in general, i think it’s easier to achieve peak flops on cpu than on gpu

and if your application on fermi is only 2x faster than $2.5k worth of workstation grade cpus, then it can be a little more difficult to get people to change their infrastructure (or at least part of it) to gpus

that would be nice… but i don’t think nvidia is looking to solve the monopoly issue

I just feel like a truck ran over me, reading about double precision on the gamer Fermi…

I hope it is bad info:

http://www.evga.com/FORUMS/tm.aspx?m=136362

That is an interesting point of view. Apart from the obvious loss of bragging rights, will only getting twice the double precision performance of current cards really be a big issue for a self-confessed layman like yourself? Given that double precision isn’t a requirement for DX10.1 or DX11, what do you imagine using double precision for? Do you use the double precision capabilities of your current GT200 GPUs now?
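
As an aside, a quick way to tell whether any given card supports double precision at all is to query its compute capability with the runtime API; anything below 1.3 has no double-precision hardware. A minimal sketch, assuming nothing beyond the standard CUDA runtime:

[code]
#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        // Double precision requires compute capability 1.3 or higher
        // (GT200-class parts and newer).
        bool hasDouble = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);

        printf("Device %d: %s (compute %d.%d) - double precision: %s\n",
               dev, prop.name, prop.major, prop.minor,
               hasDouble ? "yes" : "no");
    }
    return 0;
}
[/code]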

To be honest, I don’t know exactly how many apps I have that include double-precision calculations, nor how many of the boys here have included them in their CUDA apps.

I only know this is the first time I will be buying a GPU that has the ability to run much faster, but has been deliberately slowed down to keep me from running at that faster speed.

How many apps do you have that include double-precision calculations?

I wish I were getting a bigger percentage of the full speed…

I might be able to feel better about it.

The 200 series had no such limitation, nor the ability to run double-precision at Fermi speed.

I wanted to be on the cutting edge this time, but can’t afford Tesla.

The GPU Revolution is still the future; I, however, will be running double precision at 1/4 of the speed my GPU could actually manage.

I’m just having a hard time getting used to the idea. It’s a new one for me.

I still won’t switch camps… My bones are green.

oh that’s a low blow, 75% reduction in double precision on purpose? come on now

i wanted to get a gaming fermi for programming at home :’(

Rest assured, I will cry myself to sleep tonight.

Is the new FutureMark benchmark going to be measuring our GPUs’ double-precision calculation speed?
If so, I can only imagine how this will play out…

I hate to tell you this, but your current GPUs have that problem. In fact, every GeForce released in the last 8-odd years has had that problem. NVIDIA only enable specific OpenGL accelerations on their Quadro boards, with drivers that only load those paths when the card identifies itself as a Quadro. There are a number of professional 3D applications that run slower on GeForce cards than on Quadro cards, even when the Quadro’s GPU silicon is identical to that of the equivalent GeForce model. It seems this approach to product differentiation is now being extended to Tesla as well.

Just about all the code I develop has both single and double precision versions, but that is because my research demands it. But what I do and what you do with GPUs is rather different, which was why I asked.

Err, don’t we all?

Thanks for your response.

I guess the thing for me is that I feel like 2010 is the year for GPU-accelerated apps to shine.

I don’t know what we will be seeing released this year, and wanted to be ready for anything that might come our way.

Maybe the commercial apps that get released to us commoners won’t even be asking our GPUs to calculate in double precision?

I wish I knew…

On page 9 of the newly posted white paper, there is a charming graph (bottom right corner of the page):

http://www.nvidia.co.uk/object/gf100_uk.html

Not sure if that data is only for Tesla now. If it is, it doesn’t say…

Given that the vast majority of CUDA-capable devices “in the wild” can’t do double precision, I am pretty sure the answer is no. Further to that, there are really only certain classes of scientific and engineering problems that demand double precision anyway. Single precision arithmetic is “good enough” for most applications. Passenger vehicle crash testing is done in single precision, for example.
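
To make the “good enough” point concrete, here is a small sketch of the sort of case where the difference does show up: naively accumulating a large number of small values in a single thread. The single-precision sum drifts visibly from the expected value, while the double-precision sum stays very close to it. This is purely an accuracy illustration, not a benchmark, and the kernel is a made-up example rather than anything from the applications discussed above:

[code]
#include <cstdio>
#include <cuda_runtime.h>

// Accumulate 'n' copies of 'x' serially in single and double precision.
// One thread only: this illustrates rounding behaviour, not performance.
__global__ void naive_sum(int n, float x, float *sumF, double *sumD)
{
    float  sf = 0.0f;
    double sd = 0.0;
    for (int i = 0; i < n; ++i) {
        sf += x;
        sd += (double)x;
    }
    *sumF = sf;
    *sumD = sd;
}

int main(void)
{
    const int n = 10000000;            // ten million additions
    float *dF;  double *dD;
    cudaMalloc((void **)&dF, sizeof(float));
    cudaMalloc((void **)&dD, sizeof(double));

    // Requires a device with compute capability >= 1.3
    // (compile with e.g. nvcc -arch=sm_13).
    naive_sum<<<1, 1>>>(n, 0.1f, dF, dD);

    float hF;  double hD;
    cudaMemcpy(&hF, dF, sizeof(float),  cudaMemcpyDeviceToHost);
    cudaMemcpy(&hD, dD, sizeof(double), cudaMemcpyDeviceToHost);

    printf("single precision sum: %f\n", hF);
    printf("double precision sum: %f\n", hD);
    printf("expected (roughly):   %f\n", 0.1 * n);

    cudaFree(dF);
    cudaFree(dD);
    return 0;
}
[/code]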

I don’t know what is so charming about the graph. It shows that if you are in the business of doing double precision linear algebra on your GPU, you can get 4x speed up over the GT200. I am in that business, and it tells me that my problems that currently take 2 weeks to solve now might only take 4 days. If you aren’t. it is irrelevant.

The key is in the title of the document. Being a “compute” whitepaper, I am pretty sure it applies to Tesla only. You will note there is also a “graphics” whitepaper, which mentions double precision arithmetic only once. Different markets, different performance emphasis, vastly different price points, and (it seems) different card capabilities. Whether it makes a jot of difference really depends on what you use the card for. Personally, I would very much like to be able to buy Tesla level double precision performance for $500. If I can’t I will spend 4 or 5 times more and buy Tesla because I can see why I need it and, even at that price, it is still the least expensive way to get the performance I require to solve the next generation of problems I will be working on.

Physics simulation would be one such example, as heavily used in robotics and artificial evolution.

Also, with reference to the start of this thread, the new Fermi cards won’t cost much more than the current Tesla cards. The following prices are confirmed by NVIDIA and include a 35% educational discount, so add 35% if you’re not a university, and even if you are, add 10% if you don’t order by 31st January (a rough worked example follows the price list below).

Buy a Tesla C1060 for £530+VAT and Pre-Order the following before 31st January:

Tesla C2050 for £1,020+VAT

Tesla C2070 for £1,640+VAT

Buy a Tesla S1070-400 for £3,870+VAT and Pre-Order the following before 31st January:

Tesla S2050 for £6,260+VAT

Tesla S2070 for £9,110+VAT
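
As a rough worked example using those figures: the C2050 pre-order price of £1,020+VAT is the academic price, so by the rule of thumb above a non-academic buyer would be looking at roughly £1,020 × 1.35 ≈ £1,377+VAT, and missing the 31st January cut-off would add another 10%, so around £1,515+VAT. That is just my arithmetic applied to the quoted discount; the exact non-academic list prices aren’t stated here.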