GPU to CPU: What do GPUs need?

What else does nVidia need to do to its GPU design to make GPUs run independently of CPUs, so that they act fully like CPUs?

Do you have a use case in mind for this? It would probably be easier to put a CPU on the board than try to make the GPU autonomous.

Well, I’m just looking at the market, and I see how CPUs are becoming more like GPUs (multi-core, Cell, Intel’s Teracore) and GPUs are becoming more like CPUs (shader processors, DX10, physics processing, CUDA). GPUs are of course more powerful than CPUs in the area of mathematical calculations. By that assumption, wouldn’t they stand as a premium product in a gaming system, in which you would, for instance, replace the PS3’s Cell with a GPU?

Even looking at large-scale servers that use up to 512 Itaniums or 32 Xeons under a single OS partition, these servers IMO would work much more efficiently if they just ran off GPUs and GPUs alone. What do GPUs need to have to make this possible? A double-precision FPU?

GPUs lack the I/O and virtual memory capabilities needed for a modern CPU.

What do you mean by I/O and virtual memory capabilities? I thought the north/south bridge takes care of I/O functionality and an abundance of RAM removes the need for virtual memory.

GPUs won’t replace CPUs in systems largely because their performance on serial tasks (or serial components of mixed tasks) is so poor. In our own application, we see more than a factor of 100 between the performance of a single thread running alone on a GPU vs. the CPU. So we can still see a win for the GPU, with thousands of threads… but not every task is going to have that much parallelism.

If there’s a task that contains 10% serial work, where only one thread can operate at a time, and the GPU runs this 100 times worse, then you’ve got serious problems.
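
To put rough numbers on that (a back-of-the-envelope sketch; the 20x parallel speedup below is just an assumed figure for illustration, not a measurement):

/* Amdahl-style estimate for the 10% serial / 100x single-thread
   slowdown scenario described above. Plain host code, no GPU needed. */
#include <stdio.h>

int main(void)
{
    double serial_fraction  = 0.10;   /* portion where only one thread can run */
    double serial_slowdown  = 100.0;  /* GPU single thread vs. CPU, from above */
    double parallel_speedup = 20.0;   /* assumed GPU win on the parallel part */

    double cpu_time = 1.0;            /* normalized */
    double gpu_time = serial_fraction * serial_slowdown
                    + (1.0 - serial_fraction) / parallel_speedup;

    printf("GPU takes %.1fx as long as the CPU\n", gpu_time / cpu_time);
    /* Prints roughly 10x: the serial 10% alone costs 10x, no matter how
       fast the parallel 90% runs. */
    return 0;
}

Even if the parallel 90% took zero time, the GPU version would still be about 10 times slower overall, because of the serial 10% alone.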

There are other issues, too, but this one is fundamental and unlikely to change.

Geoff.

But isn’t parallelism the direction CPUs and programs are taking (with games like Crysis claiming they can utilize an unlimited number of CPU cores)?

To put this in terms of what would benefit from GPUs acting as the CPUs, with no CPU element in the computer at all: would a virtual world like those made by games today (or a prediction program, such as a weather model) benefit from running off GPUs and GPUs alone?

Regardless of everyone’s enthusiasm for parallelism, some tasks remain stubbornly serial. A cliche about this is that it takes 9 months to produce a baby, and assigning 9 women to the task will not allow you to produce a baby in a month. The statement about Crysis is probably perfectly accurate, for values of ‘unlimited’ that range from 1 to 16 or so. :-)

I myself do not believe the statement to the extent Crytek made it, but I’m merely repeating what they said. It should be noted, however, that they were talking about the advantages of quad core over dual core; most likely, updates to Crysis will feature greater MP support. Anyway, would clock speed and IPC therefore help in serial applications, and if so, what is the IPC of GPUs?

When I say virtual memory, I don’t mean a swap file:

http://en.wikipedia.org/wiki/Virtual_memory

There’s a lot more to general purpose CPU architecture than you might think.

Meanwhile, G80 does have virtual addressing. :) CUDA context memory spaces can be thought of as virtual address spaces.
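
A minimal sketch of what that looks like from the runtime API side (illustrative only, not tied to any particular board): the pointer cudaMalloc() hands back is an address in the current context’s own space, a kernel uses it like an ordinary pointer, but it means nothing to the host or to a different context.

#include <stdio.h>
#include <cuda_runtime.h>

__global__ void fill(int *buf, int value)
{
    buf[threadIdx.x] = value;   /* buf is a context-local address; ordinary
                                   pointer arithmetic/indexing works here */
}

int main(void)
{
    int *d_buf = NULL;
    cudaMalloc((void **)&d_buf, 256 * sizeof(int));   /* address in this context's space */

    fill<<<1, 256>>>(d_buf, 42);

    int h_buf[256];
    cudaMemcpy(h_buf, d_buf, sizeof(h_buf), cudaMemcpyDeviceToHost);
    printf("h_buf[0] = %d\n", h_buf[0]);   /* 42; dereferencing d_buf on the host
                                              would be invalid */
    cudaFree(d_buf);
    return 0;
}

Compile with nvcc; the only point is that device pointers behave like addresses within the context’s own space, which is what makes the virtual-address-space analogy reasonable.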

Indeed, it has virtual addressing, and even some kind of page protection. I noticed that a shader can read its own cubin code somewhere in memory, but not overwrite it. :)