Kudos to CUDA and NVIDIA

I just want to say that this is very good news for many of us.
I have been testing CUDA for a few months now, and I am really glad it is finally released to the public.
Now we can ask some important questions and share ideas to help spread the usage of this incredible technology.

Thank goodness we no longer have to map our algorithms onto DirectX or OpenGL to compute the physics.

Thanks to NVIDIA for doing this for free. Kudos to CUDA.
Julio Jerez

So the world can hope for Newton physics on 8800/CUDA?

Sounds cool.

Greetings from the Knax

Now I am getting ready to implement CUDA in my physics library.
I am reading the programming guide for the nth time to make sure I understand the architecture well.
I must say, the more I read, the more I like it in many areas, and not so much in others.

I have two questions.
I read somewhere that it is possible to plug more than one card into the same machine, but the manual says the cards must be identical.
I am confused: my motherboard has three PCI Express slots, and I am wondering what would happen if I plug a GeForce 8600 or 8500 into the primary PCI Express slot and an 8800 into another slot.
The reason I ask is that I am interested in using the card as a math co-processor only, for accelerating all kinds of physics simulation: rigid bodies, animation, rigid particles, SPH fluids, cloth, etc.

I read somewhere here in the forum that when the final CUDA SDK is released it will work on the entire GeForce 8 series, but that for now only the 8800 GTX and 8800 GTS are supported. Is this correct?
Ideally, what I want to do is use one 8500 as a standalone co-processor, but I do not want to jeopardize my machine by plugging in two different cards and getting catastrophic results.
From what I can see in the API, there is a function to query the hardware, so I do not see why the cards would have to be identical.
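As I read the programming guide, the query I mean would look something like the sketch below. This is only my understanding of the runtime API from the documentation, not verified against the final SDK, so take the exact fields with a grain of salt:

```cuda
// Hedged sketch: enumerate CUDA-capable devices and pick one as a
// dedicated co-processor. Field/function names are taken from the
// runtime API as described in the programming guide.
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);   // number of CUDA-capable devices present
    printf("%d CUDA device(s) found\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Each card reports its own name, capability, and memory size,
        // which is why mixed configurations seem queryable in principle.
        printf("Device %d: %s, compute %d.%d, %.0f MB global memory\n",
               i, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024.0 * 1024.0));
    }

    // A host thread could then bind to the card reserved for physics
    // before launching any kernels (hypothetical choice of device 1):
    if (count > 1)
        cudaSetDevice(1);
    return 0;
}
```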

My second question is about the design and marketing of the 8500 and 8600: from what I can see, they are targeted at the mass consumer and entry-level user.
I went to a store over the weekend and saw the 8500 for $149.99 with 256 MB of RAM.
I assume they do not have the bandwidth or the floating-point throughput of the 8800s,
so I do not understand why they are PCI Express only; at least, I did not see any AGP versions.

The following is speculation and a wish.
From what I understand, there are more than 20 million PCs in the US, but I am willing to bet that more than half of them are older-generation machines from the PCI and AGP era.
I understand that the top-of-the-line cards need the high bandwidth, but I do not see how the PCI Express bus helps on cards targeted at older PCs.
On the other hand, an AGP version of the same card would allow so many users to upgrade to a better graphics solution, with the option of better overall performance in applications that take advantage of the hardware for things other than graphics, all for a very reasonable price. After all, isn't that the reason CUDA was created?

Finally, and this is a wish: why not make a PCI version of the 8500? There are lots of game companies and middleware developers that could make good use of hardware acceleration, yet not all of these companies can afford expensive commercial licenses, or they avoid exclusive hardware solutions because they target their games at the mass market. But if there were the possibility of using acceleration on low-end hardware…, I do not know, but I have a hunch that sales would be larger than anybody can possibly imagine.
My impression is that CUDA was made to break the saturation of the graphics hardware market, but it is crippled by the bus selection across all versions; the way it is now, it is only useful for high-end users who would get the cards anyway.

I hope I am not out of line here speaking nonsense, as I usually do.
Thank you.

Yes, the next release of CUDA will support the GeForce 8500/8600.

We currently recommend that when using multiple GPUs for CUDA they all be of the same type, but there’s nothing to stop you from using an 8500 for CUDA and an 8800 for rendering, for example.

I can’t comment on any plans for AGP versions of the GeForce 8 series. Personally I think people with old AGP systems are probably better off upgrading their motherboard and CPU before buying a new GPU.

I’m pretty sure we don’t have any plans for PCI graphics cards; the bus bandwidth would be too low to make them useful, even for physics.

P.S. Looking forward to seeing Newton CUDA-accelerated!

Thank you for the reply.

I will get an 8500 right away, and in the meantime I will use the 8800 for development of non-graphics stuff and the 8500 as the graphics card. Later, after the official release, I will swap them.

My goal is to cover everything from the lowest possible common denominator up to all possible configurations.

I choose to take that in a positive way.

Just to reinforce my opinion, and this is just a comment, so no need to answer.

When the GeForce 6 and 7 series were made, anybody could upgrade their machine because there were both PCI Express and AGP versions.

However, with the 8 series it is not a simple upgrade: most PCI Express motherboards come with only one or two PCI slots, plus they require a different power supply. So this is a much more expensive proposition.

I can see how an inexpensive PCI Express card is attractive to OEMs.

But on the other hand, there are a lot of people who own quite advanced and expensive systems that are not PCI Express. For those users, getting a cheaper system is not an option.

I put myself as example.

In 2002 I bought the best system I could afford. Today that system is completely obsolete because it cannot even take a second-generation CPU.

However, that computer is still capable of running almost any application, so as an ordinary PC user I do not have any reason to get another computer, but as a developer I do.

My point is that there are many people like me who would not buy applications using this latest technology if they were forced to get an entirely new system. But if there were a way to upgrade the machine, it would be a different story.

Of course, these comments are just my speculation; I have no way of knowing whether I am right or completely wrong.

Anyway, I have run some experiments with the 8800, nothing big or extraordinary really. I am very happy with the hardware.

I’ll second that. This is a well thought-out product. I think it exposes the computing power of the GPU at just the right level of abstraction. It’s as “close to metal” ;) as necessary, but no closer.