Hi,
Is it possible for the CPU to use the DRAM on the graphics card? And if that is possible, could we program the GPU to handle serial operations and everything else too, eliminating the CPU entirely?
Thanks
It might be possible for the CPU to use the RAM in the graphics card through some kind of DMA magic, but why would you? Going through the PCI-Express bus, the latency would be horrible and bandwidth would be much lower than you could get from system memory.
The answer to your second question is also no, given the way CUDA works now. The host has to initiate kernels, though I don't know if that is a hardware or software limitation. Again, though, this would provide no benefit: running a single-threaded task on current GPUs would be very, very slow. From the perspective of a single thread, the GPU would look like a 375 MHz processor with no branch prediction and only a microscopic data cache. (A quick sketch of what the host-driven model looks like in code is below.)
This is why the Cell processor includes a PPC core on the chip to control things. The SPEs are great number crunchers, but would be horrible at running the single-threaded parts.
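To make the memory and kernel-launch points above concrete, here is a minimal sketch using the CUDA runtime API; the kernel, names, and sizes are all invented for illustration:

#include <cuda_runtime.h>
#include <stdlib.h>

/* Trivial kernel: each thread scales one element. */
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;                      /* ~4 MB of floats */
    float *h_buf = (float *)calloc(n, sizeof(float));
    float *d_buf = NULL;

    /* Device DRAM sits on the other side of the PCIe bus; the CPU
       never dereferences d_buf directly, it only copies across. */
    cudaMalloc((void **)&d_buf, n * sizeof(float));
    cudaMemcpy(d_buf, h_buf, n * sizeof(float), cudaMemcpyHostToDevice);

    /* The kernel launch always comes from host code; the GPU cannot
       decide on its own to start running something. */
    scale<<<(n + 255) / 256, 256>>>(d_buf, 2.0f, n);

    cudaMemcpy(h_buf, d_buf, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_buf);
    free(h_buf);
    return 0;
}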
To extend seibert's answer: a GPU is essentially a full-blown processor, able to do almost anything you would expect from a CPU, and moreover it can execute thousands of threads (5,000 or more) simultaneously. But each thread executes slowly compared to an actual CPU core.
A GPU is inherently faster at computing because of the number of threads in flight at any given time, but using it to execute one thread, or even fewer than a thousand or so, doesn't let you tap its real processing power.
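Just to illustrate with a toy sketch (made up for this post, not a real benchmark): launch the same kernel once with a single thread on one element and once with a full grid on a million elements, and time both with CUDA events. The full-grid launch does a million times the work but should take nowhere near a million times as long, because the single-thread launch leaves virtually the entire chip idle.

#include <cuda_runtime.h>
#include <stdio.h>

/* Each thread grinds on one element for a while. */
__global__ void busy(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float x = data[i];
        for (int k = 0; k < 1000; ++k)
            x = x * 1.0001f + 0.5f;
        data[i] = x;
    }
}

/* Time one launch with CUDA events and return milliseconds. */
static float time_launch(float *d_data, int n, int blocks, int threads)
{
    cudaEvent_t start, stop;
    float ms = 0.0f;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);
    busy<<<blocks, threads>>>(d_data, n);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

int main(void)
{
    const int n = 1 << 20;
    float *d_data = NULL;
    cudaMalloc((void **)&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    /* One thread working on one element: the rest of the chip idles. */
    printf("1 thread, 1 element        : %6.3f ms\n",
           time_launch(d_data, 1, 1, 1));

    /* A full grid working on a million elements. */
    printf("%d threads, %d elements: %6.3f ms\n", n, n,
           time_launch(d_data, n, (n + 255) / 256, 256));

    cudaFree(d_data);
    return 0;
}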
Anyway, the architectures are so different that you may never be able to run current desktop or server software efficiently on a GPU, even though it could help with things like HTTPS/SSL, encryption, authentication, virus scanning, face detection, and many other desktop- or server-related tasks.
We had a seminar today at ASU on SIFT algorithms in CUDA, and the presenter had gotten a 200x speedup on the convolution kernel. But there are still components in a program that are not parallelizable, and out-of-order execution is an area where CPUs are faster. We are working here at ASU on improving the compiler to make things run on multicores, especially on the Cell BE, and we have just started stepping into the CUDA domain.
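For anyone curious what that kind of kernel looks like, here is a naive 1D convolution sketch (not the actual code from the seminar; the filter radius and names are made up, and a real implementation would tile the input through shared memory to get speedups in that 200x range):

#include <cuda_runtime.h>

#define RADIUS 4   /* filter half-width, picked arbitrarily for the sketch */

/* Naive 1D convolution: one thread per output element, borders clamped. */
__global__ void conv1d(const float *in, const float *filt, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n)
        return;

    float acc = 0.0f;
    for (int k = -RADIUS; k <= RADIUS; ++k) {
        int j = i + k;
        if (j >= 0 && j < n)
            acc += in[j] * filt[k + RADIUS];
    }
    out[i] = acc;
}

/* Host side sets everything up and launches, as always. */
void convolve(const float *d_in, const float *d_filt, float *d_out, int n)
{
    conv1d<<<(n + 255) / 256, 256>>>(d_in, d_filt, d_out, n);
}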
Unless some magic really happens, CPUs are still going to be around.
yeah, not sure why anyone would think that big out-of-order CPUs are going away anytime soon.