Questions about performance: GTX 295 vs. FX 3800 for GROMACS and GAUSSIAN03

I’m interested in MD simulation and Gaussian (quantum chemistry) calculations using CUDA.

I have two questions about using CUDA.

1. Graphics card (GTX 295 vs. Quadro FX 3800)

Which one would give better performance for CUDA processing, approximately?

And is there any special reason to choose the Quadro?

Basic specifications are below.


[GTX 295]
576 MHz core, 1.8 GB (DDR3), 896-bit, 1988 MHz memory
$800

[FX 3800]
1 GB (DDR3), 256-bit
$1100


2. Are there any restrictions on using GROMACS or GAUSSIAN03 with CUDA? (For example: no explicit water, only simple molecules, etc.)

Best regards.

The Quadro FX 3800 has 192 “cores” and a 256-bit memory interface, compared to 2 x 240 cores and 2 x 448-bit memory for the GeForce GTX 295, which has two GPUs. So I would expect the GeForce to be faster, assuming the application can take advantage of it.

However, Quadro cards do have other advantages, especially for graphics applications; this document explains the differences well:
http://www.nvidia.com/object/quadro_geforce.html

I don’t know much about Gromacs etc.; you might want to try asking the application developers.

Thank you for your reply!

I also found a post, “Quadro FX 4800 vs GTX 280”.
(http://www.tomshardware.com/reviews/quadro-fx-4800,2258-10.html)

Based on that post, my opinion is:
“The GTX 280 has better specs than the Quadro FX 4800, although the Quadro performs better in graphics work.”

Since GAUSSIAN and GROMACS calculations are not graphics workloads, I think the GTX could be the better choice than the Quadro (in terms of both performance and cost).

Hi yoochan!

I’d say that you’re better off with the GeForce from the price/performance point of view. Keep in mind that the GTX 295 has two GPUs; in order to make use of both of them, the application you want to run has to have multi-GPU support. Otherwise, the GTX 285 is a good choice. On the other hand, Tesla is a much better option than Quadro, as it’s intended for computing, especially if you want high-end hardware and/or you can make use of its 4 GB of main memory.

Disclaimer: I work in the research group where the main part of Gromacs development is concentrated and also contribute to OpenMM development.

To answer your second question, yes, there is a big restriction: neither GAUSSIAN (at least based on version 09’s feature list) nor GROMACS supports GPU acceleration, at least not yet. Although OpenMM provides a modified version of Gromacs that runs on GPUs, the Gromacs-OpenMM 1.0 beta readme clearly states: “This is a preview release, and should be treated as prototype software.” That said, we are planning to release an experimental version of GPU-accelerated Gromacs that uses OpenMM, so if you are interested, keep an eye on the Gromacs website. Note that this will still be quite far from a production-ready tool; for that, you’ll have to wait a bit longer…