Hello,
Suppose each CUDA thread has its own piece/block of global memory that only that thread itself reads from and writes to.
So one can consider the situation as "a single CUDA thread"; all the other threads behave the same way (each on its own piece of memory somewhere else in global memory).
So my question is:
Can a single CUDA thread immediately read a global memory address/variable/cell right after it just wrote to that same memory address/variable/cell (and vice versa)?
Or do even single CUDA threads have potential "read-after-write" issues (and vice versa)?
For example, a CUDA thread/kernel does the following:
GlobalArray[1000] = 5;
GlobalArray[1000] = GlobalArray[1000] * 10;
GlobalArray[1000] = GlobalArray[1000] + 1;
GlobalArray[1000] = GlobalArray[1000] * 7;
GlobalArray[1000] = GlobalArray[1000] / 4;
GlobalArray[1000] = GlobalArray[1000] % 100;
GlobalArray[1000] = GlobalArray[1000] + 66;
Would this cause race conditions, read-after-write issues, or write-after-read issues?
Or would the code above execute safely/consistently?
I guess this would be safe, as long as the same thread executes this code… since the thread executing this code will probably "stall" on the memory accesses and the CUDA core will switch to executing another thread in the meantime… later the core will return to this thread and happily continue executing this code… so there shouldn't be any race conditions (?)! :)
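To make it fully concrete, here is roughly what the complete kernel I have in mind could look like (the kernel name, array size and launch configuration are just made up for illustration; each thread only ever touches its own element):

#include <cstdio>
#include <cuda_runtime.h>

// Each thread reads and writes only its own element of GlobalArray,
// so no other thread ever touches that address.
__global__ void perThreadReadAfterWrite(int *GlobalArray)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    GlobalArray[i] = 5;
    GlobalArray[i] = GlobalArray[i] * 10;
    GlobalArray[i] = GlobalArray[i] + 1;
    GlobalArray[i] = GlobalArray[i] * 7;
    GlobalArray[i] = GlobalArray[i] / 4;
    GlobalArray[i] = GlobalArray[i] % 100;
    GlobalArray[i] = GlobalArray[i] + 66;
}

int main()
{
    const int threads = 256;
    const int blocks  = 4;
    const int n       = threads * blocks;

    int *dArray = nullptr;
    cudaMalloc(&dArray, n * sizeof(int));

    perThreadReadAfterWrite<<<blocks, threads>>>(dArray);
    cudaDeviceSynchronize();

    // Copy one element back to check the result on the host.
    int result = 0;
    cudaMemcpy(&result, dArray, sizeof(int), cudaMemcpyDeviceToHost);
    printf("element 0 = %d\n", result);   // expected: ((5*10+1)*7)/4 % 100 + 66 = 155

    cudaFree(dArray);
    return 0;
}

So the question is really whether each of those seven lines is guaranteed to see the value the previous line just stored, given that no other thread ever accesses GlobalArray[i].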
Bye,
Skybuck.