shared memory and syncthreads question

Hi everyone,

I have a question about using shared memory in combination with __syncthreads.
When I need to load some data into shared memory or initialize it, I use, for example:
[font="Courier New"]
int threadNo = blockDim.x * blockIdx.x + threadIdx.x;
if (threadNo < sharedMemValues)
{
    sharedPotentials[threadNo] = 0.0f;
}
__syncthreads();
[/font]
The “syncthreads” call is there to ensure that all values are set before the program continues.

The CUDA manual says that “syncthreads” makes all threads of that block wait at that line. But I have more than one block; isn’t it possible that other blocks access the shared memory before “block 0” has finished setting it, or even that block 2 is executed before block 0 and the shared memory isn’t set yet?

I hope you can understand what I mean! → What am I supposed to do?

Thx,
Philipp.

Shared memory is only visible to its own block. Every block has its own shared memory. The “shared” means shared between the warps (and threads) of that block, not shared between blocks.
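To illustrate, here is a minimal sketch (the names sharedPotentials and sharedMemValues come from your snippet; the kernel name, array size, and output array are just assumptions). Each block initializes only its own copy of the shared array, indexed with threadIdx.x, and __syncthreads() orders only the threads of that one block:

[font="Courier New"]
__global__ void initPotentials(float *globalOut, int n)
{
    // One copy per block; other blocks never see this array.
    __shared__ float sharedPotentials[256];   // assumes blockDim.x <= 256

    int tid      = threadIdx.x;                     // index within this block
    int threadNo = blockDim.x * blockIdx.x + tid;   // global thread index

    sharedPotentials[tid] = 0.0f;   // initialize this block's copy
    __syncthreads();                // barrier for the threads of THIS block only

    // ... work on sharedPotentials within this block ...

    if (threadNo < n)
        globalOut[threadNo] = sharedPotentials[tid];   // results other blocks should see go to global memory
}
[/font]

Note that the shared array is indexed with threadIdx.x, not with the global thread number, because each block only has room for its own blockDim.x entries.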

If you want to have multiple blocks intercommunicate, you usually read and write global memory, and use kernel launches as barriers. In some cases global memory atomics are useful too.
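A sketch of that pattern (not your actual kernels, just assumed names and a placeholder computation): split the work into two kernels so the second only starts after the first has finished writing global memory, and use atomicAdd when several blocks must update the same global value (float atomicAdd needs a device of compute capability 2.0 or later):

[font="Courier New"]
// Kernel 1: every block writes its partial result to global memory.
__global__ void computePartials(float *partials, int n)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n)
        partials[i] = 1.0f;   // placeholder computation
}

// Kernel 2: combine the partials; blocks update one global value atomically.
__global__ void reducePartials(const float *partials, float *total, int n)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n)
        atomicAdd(total, partials[i]);   // safe even when many blocks hit the same address
}

// Host side: the kernel boundary acts as the barrier between blocks.
//   computePartials<<<numBlocks, threadsPerBlock>>>(d_partials, n);
//   reducePartials <<<numBlocks, threadsPerBlock>>>(d_partials, d_total, n);
[/font]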

Hey, thank you very much. I’m sorry, I’m not a professional programmer (just an electrical engineer), so I don’t understand all those manuals that fast :)