The Quadro FX 3800 has 192 “cores” and a 256-bit memory interface, compared to 2 x 240 cores and 2 x 448-bit memory for the GeForce GTX 295, which has two GPUs. So I would expect the GeForce to be faster, assuming the application can take advantage of it.
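Just to make the memory side of that comparison concrete, here's a back-of-the-envelope calculation of theoretical peak bandwidth from the bus width; the memory data rates in the comments are assumed round figures for illustration, not official board specs:

```c
#include <stdio.h>

/* Rough theoretical peak bandwidth: bus width (bits) / 8 * effective
 * memory data rate (MT/s). Data rates below are assumed round figures
 * for illustration only -- check the actual board specifications. */
static double peak_gb_per_s(int bus_bits, double data_rate_mt_s)
{
    return (bus_bits / 8.0) * data_rate_mt_s * 1e6 / 1e9;
}

int main(void)
{
    printf("Quadro FX 3800 (256-bit @ ~1600 MT/s): ~%.0f GB/s\n",
           peak_gb_per_s(256, 1600.0));
    printf("GTX 295, per GPU (448-bit @ ~2000 MT/s): ~%.0f GB/s\n",
           peak_gb_per_s(448, 2000.0));
    return 0;
}
```

Even with those rough numbers, the wider interface on each GTX 295 GPU gives it roughly twice the bandwidth of the Quadro FX 3800, which matters a lot for memory-bound CUDA kernels.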
As I said in that post, my opinion is:
" The GTX 280 have better spec. than Quadro FX 4800, although Quadro has better performance at graphic works."
And since GAUSSIAN and GROMACS calculations are not graphics work, I think the GTX could be the better choice than the Quadro, in terms of both performance and cost ($$$).
I’d say that you’re better off with the GeForce from the price/performance point of view. Keep in mind that the GTX 295 has two GPUs; to make use of both of them, the application you want to run has to have multi-GPU support. Otherwise, the GTX 285 is a good choice. On the other hand, a Tesla is a much better option than a Quadro, as it’s intended for computing, especially if you want high-end hardware and/or you can make use of its 4 GB of memory.
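In case it's useful, here's a minimal CUDA sketch (host code only) that lists the devices the runtime sees. A GTX 295 shows up as two separate CUDA devices, and an application has to set up work on each one explicitly (e.g. one host thread per device) to actually benefit from both:

```c
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA-capable device found\n");
        return 1;
    }
    /* A GTX 295 is reported as two devices here; an application needs
     * explicit multi-GPU support to keep both of them busy. */
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, %d multiprocessors, %.0f MB global memory\n",
               i, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0));
    }
    return 0;
}
```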
Disclaimer: I work in the research group where most of the Gromacs development takes place, and I also contribute to OpenMM development.
To answer your second question: yes, there is a big restriction. Neither GAUSSIAN (at least based on the version 09 feature list) nor GROMACS supports GPU acceleration, at least not yet. Although OpenMM provides a modified version of Gromacs that runs on GPUs, the Gromacs-OpenMM 1.0 beta readme clearly states: “This is a preview release, and should be treated as prototype software.” On the other hand, we are planning to release an experimental version of GPU-accelerated Gromacs that uses OpenMM, so if you are interested, keep an eye on the Gromacs website. Note that this will still be quite far from a production-ready tool; for that you’ll have to wait a bit longer…