Difference between CUDA and GPGPU

Hello all, what is the difference between CUDA and GPGPU?


There is no meaningful comparison, because the two terms describe entirely different things.

CUDA is a hardware architecture and programming model for GPUs, and more generally for fine-grained parallel computing. GPGPU stands for 'general-purpose computation on graphics processing units'. You do GPGPU with CUDA. One source of confusion is that NVIDIA marketing switched from 'GPGPU' to 'GPU Computing' when CUDA was introduced back in late 2006; I still use the two terms synonymously. Another is that techniques developed in the early days of GPU computing are now often called 'legacy GPGPU', meaning programming graphics hardware through graphics APIs like OpenGL. We use this nomenclature on GPGPU.org as well.
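To make "doing GPGPU with CUDA" concrete: it means writing an ordinary data-parallel compute kernel in CUDA C, rather than disguising the computation as graphics calls. A minimal sketch (the kernel name and launch configuration are just illustrative):

```cuda
#include <cuda_runtime.h>

// A plain data-parallel computation expressed directly in CUDA C --
// no graphics API involved. Each thread handles one array element.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// Launched from host code with, e.g.:
//   saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);
```

In the legacy GPGPU style, the same computation would have been encoded as a fragment shader drawing into a texture; with CUDA the kernel above is just C with a parallel launch.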


What are the advantages of CUDA compared to CPU programming?

Does CUDA have a stack pointer or an instruction pointer?



Program counter: yes. Each multiprocessor maintains a PC, one per warp waiting for execution in the scheduling queue.
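One practical consequence of the PC being per warp rather than per thread: all threads in a warp share it, so a data-dependent branch that splits a warp is executed in serialized passes with the inactive threads masked off (warp divergence). A minimal sketch:

```cuda
// Threads within one warp share a single program counter, so this
// branch is executed in two serialized passes: first the even-tid
// threads run (odd ones masked off), then the odd-tid threads.
__global__ void divergent(int *out)
{
    int tid = threadIdx.x;
    if (tid % 2 == 0)
        out[tid] = tid * 2;
    else
        out[tid] = tid + 1;
}
```

Branches whose condition is uniform across a warp (e.g. on `blockIdx.x`) do not pay this cost, since the whole warp takes the same path.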

Stack: no, there is no stack. When compound data types (structs, objects, arrays) need to be passed to functions, thread-local storage in global memory is used instead. This is slow, so the programmer should avoid it whenever possible.
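A sketch of what avoiding that looks like in practice (the type and function names here are purely illustrative, not from any particular codebase):

```cuda
struct Params { float a[64]; };   // a large compound type

// Passing 'p' by value may force the compiler to spill the copy to
// slow thread-local storage in global memory ("local memory").
__device__ float by_value(Params p)
{
    return p.a[threadIdx.x % 64];
}

// Passing a pointer to data already resident in shared or global
// memory avoids materializing a per-thread copy.
__device__ float by_pointer(const Params *p)
{
    return p->a[threadIdx.x % 64];
}
```

Whether the by-value copy actually spills depends on the compiler; small structs that fit in registers are usually fine.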

Current CUDA hardware (compute capability <= 1.3) has serious disadvantages compared to a CPU in this respect. The next architecture, "Fermi", will change this: among other things, it brings a unified address space for shared, local and global memory, and introduces an L1 cache.

Also, the compiler aggressively inlines most device function calls, which eliminates much of the need for a stack in the first place.
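You can see (and influence) that inlining with the function qualifiers CUDA C provides. A small sketch, assuming compute capability 1.x hardware where `__device__` functions are inlined by default:

```cuda
// Inlined by default on compute capability 1.x, so the call below
// compiles to straight-line code with no call/return or stack frame.
__device__ float scale(float x, float s)
{
    return x * s;
}

// __noinline__ hints the compiler to keep the function call; useful
// mainly to reduce code size when a function is called in many places.
__device__ __noinline__ float scale_noinline(float x, float s)
{
    return x * s;
}

__global__ void kernel(float *data, float s)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] = scale(data[i], s);
}
```

Note that `__noinline__` is only a hint; the compiler may still inline when it has to.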