Need expert opinion on Quadro CX vs. 2 GTX 280s: what tech is inside, and how do they perform?


I wanted to speak to some experts in CUDA programming to help me make an informed buying decision. I’m trying to determine the raw computing power of the Quadro CX vs. 2 GTX 280s. The specifications seem to imply that 2 GTX 280s would be “more powerful”, with more RAM and more parallel processors.

I’m interested in both video editing and video game performance, and the video outputs included on the Quadro CX are more “pro” than I currently need. The sales videos on CUDA seem to imply that the GTX 280 will also accelerate any CUDA-enabled program, such as Adobe Premiere and 3rd-party plugins.

So, the questions are:

Will 2 GTX 280s provide comparable computing power to a Quadro CX?

Will Adobe Premiere and After Effects use that power in a similar manner?



I haven’t heard about this Adobe CUDA product. I don’t know if NVIDIA has somehow crippled the software to only work on a CX, but in general, a single GTX 280 would be more powerful than this Quadro. (Two 280s would also be more powerful, but note that it is not trivial to use two GPUs in parallel… unlike with SLI, CUDA software must be specially designed for multi-GPU.)
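To give a feel for what “specially designed for multi-GPU” means, here is a minimal sketch using the CUDA runtime API: the application itself must enumerate the devices and explicitly split the work between them (the `scale` kernel and the even split are hypothetical; also note that in the CUDA versions of this era, `cudaSetDevice` could be called only once per host thread, so production code actually ran one host thread per GPU):

```cuda
#include <cuda_runtime.h>

// Hypothetical kernel: scales a chunk of data in place.
__global__ void scale(float *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    const int N = 1 << 20;            // total elements
    int chunk = N / deviceCount;      // naive even split across GPUs

    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaSetDevice(dev);           // subsequent calls target this GPU
        float *d_data;
        cudaMalloc(&d_data, chunk * sizeof(float));
        // ... copy this device's chunk of the input here ...
        scale<<<(chunk + 255) / 256, 256>>>(d_data, chunk, 2.0f);
        // ... copy results back and synchronize here ...
        cudaFree(d_data);
    }
    return 0;
}
```

None of this partitioning happens automatically the way SLI transparently splits rendering work, which is why an application must be written for multi-GPU from the start.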

Fantastic- Thanks!

As mentioned before, the Quadro CX is comparable to the GTX 260, so it will have less computing power than a GTX 280.

I would like to ask: in comparison to ATI graphics cards, is NVIDIA better?

This forum is about CUDA, which does not run on ATI cards, so there is no direct comparison to make.

ATI however supplies its own GPU programming environment.

It’s difficult to really compare them as platforms, but you can compare the hardware. I find that theoretical peak FLOPS are higher for ATI cards, but theoretical peak memory bandwidth is higher for nVidia. Since nearly all kernels are bandwidth-bound, one might guess that nVidia cards are generally better in practical applications, but that’s not really a valid comparison; we’re excluding far too many factors.
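As an illustration of why bandwidth tends to dominate, consider a plain SAXPY kernel; the figures in the comments are back-of-envelope estimates for a GTX 280, not measurements:

```cuda
// SAXPY: y[i] = a * x[i] + y[i]
// Per element: 2 FLOPs and 12 bytes of memory traffic
// (read x, read y, write y), so the arithmetic intensity is
// roughly 2/12 ≈ 0.17 FLOP/byte.
// With a GTX 280's ~141 GB/s peak bandwidth, that caps SAXPY at
// roughly 141 * 0.17 ≈ 24 GFLOPS -- a small fraction of the chip's
// several-hundred-GFLOPS single-precision peak. Kernels like this
// saturate the memory bus long before the ALUs, i.e. they are
// bandwidth-bound, so peak FLOPS numbers say little about them.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}
```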

Personally, I stick to nVidia when it comes to GPGPU simply because the community is bigger and CUDA is more popular/well known (kudos to nV marketing, I suppose ;) ) than AMD/ATI’s solution (I had to spend five minutes just now to find out what it’s called!).

Not really. It tried to supply two different environments in the past, both of which failed. AMD/ATI is waiting for more standardization, such as via OpenCL, before jumping in again. It hasn’t invested heavily in software development the way NVIDIA has, which is understandable, since a proprietary solution will eventually die unless it metamorphoses into a platform-independent one.