Is CUDA a parallel or concurrent programming language?


I wonder whether CUDA is a parallel or a concurrent programming language. Can anybody make it clearer?

I’m not sure how you draw the distinction between “parallel” and “concurrent”, although I think I would lean more toward “parallel”. CUDA is a C extension for SIMD programming without explicit reference to SIMD vector registers. In order to make this work (since you are writing scalar instructions at the thread level that are implicitly SIMD across many threads), you write programs in a data parallel style, rather than a task parallel style.
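To illustrate the data-parallel style described above, here is a minimal sketch of a SAXPY kernel (the kernel name and launch configuration are just for illustration): each thread runs the same scalar code on one element, and the hardware executes many such threads in SIMD fashion, with no vector registers visible in the source.

```cuda
#include <cuda_runtime.h>

// Each thread computes one element: scalar code at the thread level,
// implicitly SIMD across many threads.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                  // guard threads that fall past the end of the data
        y[i] = a * x[i] + y[i];
}

// Launch: one thread per data element, grouped into blocks of 256.
// saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);
```

Note there is no loop over the data in the kernel itself: the parallelism comes entirely from launching one thread per element, which is the essence of the data-parallel (rather than task-parallel) style.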

…and communication between parallel execution units is via a tiered shared memory approach, with different levels of visibility. There is no explicit message passing in the language or the hardware, although I suppose such a thing could be built on top of the shared memory semantics. (Note, when I say “shared” here, I mean in the general sense, not specifically the “shared memory” in CUDA.)
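As a sketch of one tier in that hierarchy (the per-block `__shared__` memory specifically), here is a hypothetical block-wide reduction: threads in the same block communicate through on-chip shared memory, and `__syncthreads()` orders their accesses. Blocks then communicate with each other only through global memory.

```cuda
#include <cuda_runtime.h>

// Threads within one block communicate via __shared__ memory,
// which is visible only to that block; __syncthreads() is the barrier
// that makes writes visible before subsequent reads.
__global__ void blockSum(const float *in, float *out)
{
    __shared__ float buf[256];          // per-block tier of the memory hierarchy
    int tid = threadIdx.x;
    buf[tid] = in[blockIdx.x * blockDim.x + tid];
    __syncthreads();                    // all loads into buf complete here

    // Tree reduction within the block.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            buf[tid] += buf[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = buf[0];       // blocks communicate via global memory only
}
```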

Thanks. What I really mean is: is the kernel execution paradigm concurrent or parallel? I think of it as parallel, since the GPU is a multicore platform.