Memory access in CUDA

Dear all

As is known, the CPU is managed by the operating system and, if I'm not mistaken, so is memory access: when two threads read the same memory location at the same time there is no problem, no deadlock, and no contention on the resource, but when they both want to write, only one of them does so at a time.

Is the same true for GPUs when we program them with CUDA C?
And who manages the global memory?

With regards

Yes, it’s also true for CUDA C. Any number of threads can read the same location, with no deadlock or contention.

Any number of threads can write the same location, but only one of the writes will end up in the location.
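A minimal sketch of both behaviors, assuming a single-GPU setup (the kernel and variable names here are just for illustration): every thread of one block performs a plain write to the same global location, so exactly one write survives and which one is undefined, while `atomicAdd` serializes the updates so all of them take effect.

```cuda
#include <cstdio>

// All 256 threads write to *plain (one write wins, which one is
// undefined) and atomically increment *atomic_sum (all 256 land).
__global__ void raceDemo(int *plain, int *atomic_sum)
{
    *plain = (int)threadIdx.x;   // racing plain writes: one survives
    atomicAdd(atomic_sum, 1);    // atomic updates: all are applied
}

int main()
{
    int *d_plain, *d_atomic;
    cudaMalloc(&d_plain, sizeof(int));
    cudaMalloc(&d_atomic, sizeof(int));
    cudaMemset(d_plain, 0, sizeof(int));
    cudaMemset(d_atomic, 0, sizeof(int));

    raceDemo<<<1, 256>>>(d_plain, d_atomic);

    int plain, atomic_sum;
    cudaMemcpy(&plain, d_plain, sizeof(int), cudaMemcpyDeviceToHost);
    cudaMemcpy(&atomic_sum, d_atomic, sizeof(int), cudaMemcpyDeviceToHost);

    // plain holds some single thread's index; atomic_sum is 256.
    printf("plain = %d, atomic = %d\n", plain, atomic_sum);

    cudaFree(d_plain);
    cudaFree(d_atomic);
    return 0;
}
```

So if you need every write to be reflected in the result, use an atomic function; a plain concurrent write only guarantees that one of the values ends up in the location.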

The GPU driver acts as the operating system for the GPU, and manages global memory.

Thank you, txbob, for your contribution.

Where can I find this information in official documentation that I can cite?