Dear all,
As is known, the CPU is managed by the operating system, and if I'm not mistaken, memory access is also coordinated by the OS: when two threads read the same memory location at the same time there is no problem, no deadlock, and no contention on the resource (both threads read simultaneously), but if they both want to write, only one of them can do so at a time.
Is this also true for GPUs when we program them with CUDA C?
And who does the management of the global memory?
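To make the question concrete, here is a minimal CUDA C sketch of the situation I am asking about (the kernel names and launch sizes are just my own example): many threads writing to the same global-memory location, once with plain stores and once with `atomicAdd`. If I understand correctly, the plain stores are serialized by the hardware but the surviving value is unspecified, while the atomic version counts every update.

```cuda
#include <cstdio>

// All 256 threads store to the same global-memory word.
// The writes race: one of them "wins", but which is undefined.
__global__ void plain_write(int *out) {
    *out = threadIdx.x;   // last writer wins, unpredictable result
}

// With an atomic read-modify-write, every increment is applied.
__global__ void atomic_add(int *out) {
    atomicAdd(out, 1);    // result is deterministically 256
}

int main() {
    int *d, h;
    cudaMalloc(&d, sizeof(int));

    cudaMemset(d, 0, sizeof(int));
    plain_write<<<1, 256>>>(d);
    cudaMemcpy(&h, d, sizeof(int), cudaMemcpyDeviceToHost);
    printf("plain writes: %d (some thread's index)\n", h);

    cudaMemset(d, 0, sizeof(int));
    atomic_add<<<1, 256>>>(d);
    cudaMemcpy(&h, d, sizeof(int), cudaMemcpyDeviceToHost);
    printf("atomicAdd:    %d (always 256)\n", h);

    cudaFree(d);
    return 0;
}
```

Is this the right mental model, or does the CUDA runtime (rather than the OS) play some role here?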
With regards