I’ve been working through the developer documentation for CUDA over the past few days, and there are some parts of it that still aren’t quite clear to me. The “Global Memory” feature is described as giving the developer full scatter/gather read access to a memory pool, but which memory pool does this refer to: the GPU’s video RAM, or the main system RAM? The Parallel Data Cache is a memory pool for the ALUs to use cooperatively, but if global memory refers to the main RAM, then what is the term for the video RAM? Is that the Local Memory?
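To make the question concrete, here is a tiny kernel sketch of how I currently picture the memory spaces. The mapping in the comments (global memory = video RAM allocated with `cudaMalloc`, Parallel Data Cache = `__shared__` memory) is my assumption, and exactly what I’m asking to confirm:

```cuda
// Sketch only — the memory-space mapping in the comments is my guess.
__global__ void scale(float *out, const float *in, float factor, int n)
{
    // My reading of the "Parallel Data Cache": fast on-chip memory
    // shared by all threads of a block, declared with __shared__.
    __shared__ float tile[256];

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // 'in' and 'out' are in global memory — which I assume means the
        // GPU's own video RAM, allocated on the host with cudaMalloc().
        tile[threadIdx.x] = in[i];
        __syncthreads();
        out[i] = tile[threadIdx.x] * factor;
    }
}
```

If that reading is wrong, and global memory instead means the main system RAM, I’d like to know what the video RAM is called in CUDA terminology.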
Thank you in advance,
- Andreas Eklöv