I know this is rather a question about company politics, but I am sure a lot of people are also very curious about double precision support, since many calculations require double precision floating point numbers. Combine this with the fact that CUDA, unlike ATI’s CTM solution, works with every common GeForce 8800 card, and things could get REALLY interesting!
Are there any plans for this yet? Maybe as part of the GeForce 9 series?
Hi Jayson,
from the NVIDIA CUDA Release Notes Version 0.8 file:
Q: Does CUDA support Double Precision Floating Point arithmetic?
A: CUDA supports the C “double” data type. However on G80
(e.g. GeForce 8800) GPUs, these types will get demoted to 32-bit
floats. NVIDIA GPUs supporting double precision in hardware will
become available in late 2007.
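For anyone who wants to see that demotion for themselves, here is a minimal sketch (my own illustration, not from the release notes): the kernel computes 1.0 + 1e-9, which survives in a 64-bit double but rounds away in a 32-bit float, so the printed value reveals which precision the kernel actually ran in.

```cuda
// Minimal sketch: detect whether device-side doubles are demoted to floats.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void checkDouble(float* out)
{
    double x = 1.0;
    x += 1.0e-9;                        // lost entirely if x is really a 32-bit float
    *out = (float)((x - 1.0) * 1.0e9);  // ~1.0 with real doubles, 0.0 if demoted
}

int main()
{
    float *dOut, hOut = -1.0f;
    cudaMalloc(&dOut, sizeof(float));
    checkDouble<<<1, 1>>>(dOut);
    cudaMemcpy(&hOut, dOut, sizeof(float), cudaMemcpyDeviceToHost);
    printf("%.3f (about 1.0 = real doubles, 0.0 = demoted to float)\n", hOut);
    cudaFree(dOut);
    return 0;
}
```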
Thanks for your reply. This is just awesome! :magic:
I am really looking forward to these cards (and maybe the first GPU-accelerated raytracing render engines, thanks to CUDA)!
So can one assume that those next-generation GeForce chips (i.e. 9xxx or whatever) will all support double precision floating point arithmetic (64-bit floats), or will that be limited to the high-end SLI GPUs?
Either way, I think this initial CUDA release is a good start to match ATI’s ‘Close-to-the-Metal’ (ATI/AMD CTM™) solution, but real double precision support in hardware will be revolutionary when it becomes available through CUDA in future GPUs.
All this makes me very curious about what NVIDIA (and ATI) will come out with next. ATI/AMD has already announced their “Fusion” chip (a combined CPU and GPU in one package/chip); I wonder if NVIDIA will come up with something as innovative as well, like maybe a combined APU (Audio Processor Unit) and GPU in one package/chip, or a relatively cheap generic ‘stream processor’ PCIe card for home and office use (game physics and video encoding at home, CAD/CAM at work, etc.)? At the very least I hope that NVIDIA (and ATI) will start to manufacture inexpensive dual-core and quad-core GPU chips for home/gamer usage, multi-core GPUs that future GPGPU applications can also take advantage of.
Two cores/functions are better than one. Just my 2 cents.
G80 is very big, and R600 will also be big iron. So where would you find the die space for some legacy general-purpose CPU cores?
I think “Fusion” is a marketing/management idea. It sounds silly.
But a faster connection than PEG (PCI Express Graphics) is needed. The Xbox 360 and PS3 are interesting designs, but it will take PC architecture years to get there.
Nothing stops you from implementing audio processing on the GPU with CUDA (see the sketch below). So what functionality would a DSP add to the GPU?
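For example, here is a minimal sketch of the kind of audio processing meant above: a CUDA kernel that simply applies a gain to a buffer of samples. The buffer size and gain value are arbitrary assumptions for illustration.

```cuda
// Minimal sketch: apply a gain to an audio buffer with CUDA.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void applyGain(float* samples, int n, float gain)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) samples[i] *= gain;   // scale each sample in parallel
}

int main()
{
    const int n = 1 << 16;                     // 65536 samples (assumed size)
    std::vector<float> buf(n, 0.25f);          // dummy input signal

    float* dBuf;
    cudaMalloc(&dBuf, n * sizeof(float));
    cudaMemcpy(dBuf, buf.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    applyGain<<<(n + 255) / 256, 256>>>(dBuf, n, 2.0f);  // +6 dB gain

    cudaMemcpy(buf.data(), dBuf, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("first sample after gain: %f\n", buf[0]);     // expect 0.5
    cudaFree(dBuf);
    return 0;
}
```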
GPUs have been “multi-core” designs for the last 6-8 years. So what are
you waiting for?
Greetings
Knax
@Jayson, according to this article you can “accelerate double precision iterative solvers for Finite Element simulations with current GPUs by applying a mixed precision defect correction approach”; see:
http://numod.ins.uni-bonn.de/research/pape…tTu05double.pdf
Double precision on GPUs (ASIM 2005): Dominik Goddeke, Robert Strzodka, and Stefan Turek. Accelerating Double Precision FEM Simulations with GPUs. Proceedings of ASIM 2005 - 18th Symposium on Simulation Technique, 2005.
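To make the idea behind that paper concrete, here is a rough sketch of mixed precision defect correction (my own toy example, not the authors’ code): a 1D Poisson system where the inner Jacobi solve runs in single precision on the GPU and the outer residual/correction loop stays in double precision on the host. The problem size, iteration counts, and choice of Jacobi as the inner solver are all assumptions for illustration.

```cuda
// Sketch of mixed-precision defect correction for a 1D Poisson system
// (tridiagonal matrix: 2 on the diagonal, -1 off-diagonal).
#include <cstdio>
#include <cmath>
#include <utility>
#include <vector>
#include <cuda_runtime.h>

const int N = 256;           // number of unknowns (assumed problem size)
const int INNER_ITERS = 200; // single-precision Jacobi sweeps per outer step

// One Jacobi sweep in single precision on the GPU.
__global__ void jacobiSweep(const float* xOld, float* xNew, const float* rhs, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float left  = (i > 0)     ? xOld[i - 1] : 0.0f;
    float right = (i < n - 1) ? xOld[i + 1] : 0.0f;
    xNew[i] = 0.5f * (rhs[i] + left + right);   // x_i = (b_i + x_{i-1} + x_{i+1}) / 2
}

int main()
{
    std::vector<double> u(N, 0.0), b(N, 1.0);    // double-precision solution and RHS
    std::vector<float>  rSingle(N), cSingle(N);  // single-precision residual/correction

    float *dOld, *dNew, *dRhs;
    cudaMalloc(&dOld, N * sizeof(float));
    cudaMalloc(&dNew, N * sizeof(float));
    cudaMalloc(&dRhs, N * sizeof(float));

    for (int outer = 0; outer < 20; ++outer) {
        // 1. Residual r = b - A*u, accumulated in double precision on the host.
        double rNorm = 0.0;
        for (int i = 0; i < N; ++i) {
            double left  = (i > 0)     ? u[i - 1] : 0.0;
            double right = (i < N - 1) ? u[i + 1] : 0.0;
            double r = b[i] - (2.0 * u[i] - left - right);
            rSingle[i] = (float)r;               // demote residual for the GPU solve
            rNorm += r * r;
        }
        printf("outer %2d  residual norm %.3e\n", outer, sqrt(rNorm));
        if (sqrt(rNorm) < 1e-10) break;

        // 2. Approximately solve A*c = r in single precision on the GPU.
        cudaMemset(dOld, 0, N * sizeof(float));
        cudaMemcpy(dRhs, rSingle.data(), N * sizeof(float), cudaMemcpyHostToDevice);
        for (int k = 0; k < INNER_ITERS; ++k) {
            jacobiSweep<<<(N + 127) / 128, 128>>>(dOld, dNew, dRhs, N);
            std::swap(dOld, dNew);
        }
        cudaMemcpy(cSingle.data(), dOld, N * sizeof(float), cudaMemcpyDeviceToHost);

        // 3. Correct the double-precision solution: u = u + c.
        for (int i = 0; i < N; ++i) u[i] += (double)cSingle[i];
    }

    cudaFree(dOld); cudaFree(dNew); cudaFree(dRhs);
    return 0;
}
```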
I was thinking more about true multi-core chips with several cores inside one chip (like the AMD X2 and Intel Core 2 Duo), not several separate processors on one board (and not SLI with several boards). It would then be up to the software/middleware (like the physics engine in a computer game) to decide how to best use those cores to get the most out of them.
I think he meant that, for example, the G80 has 128 “cores” on one chip. And GPUs have had at least two cores for a long time. (I don’t know which card was the first to have that distinction.)
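If anyone wants to see how their own card reports this, the CUDA runtime can print the multiprocessor count; on a G80, 16 multiprocessors with 8 stream processors each give the 128 figure. A minimal sketch (the “times 8” factor is specific to G80-class hardware):

```cuda
// Minimal sketch: query device 0 and print its multiprocessor count.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // device 0
    printf("%s: %d multiprocessors (x 8 stream processors on G80 = %d)\n",
           prop.name, prop.multiProcessorCount, prop.multiProcessorCount * 8);
    return 0;
}
```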