Share information among threads

Hi forum,

I have a wireless sensor network simulator with a lot of nodes. Each node sends information to other nodes.
The simulator code currently runs sequentially.

What I want is to convert the sequential code to parallel code with CUDA.

I want to know whether it is possible to assign each node to a GPU core and share information (e.g., via read/write methods) between different threads, and if so, how can I do it?

Thanks in advance

You don't really work with GPU cores directly; instead, you work with threads. Within a block, threads can share information through shared memory. For example, if you launch 64 threads in a block, you can declare `__shared__ type var_name[64]`, and every thread in that block can read and write `var_name`. There is a hardware limit of 512 or 1024 threads per block, depending on your GPU, so if you need more nodes than that, sharing information gets harder (you would have to go through global memory and synchronize across blocks).
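To make the idea concrete, here is a minimal sketch of one block of 64 threads exchanging values through a `__shared__` array. It is a hypothetical stand-in for node-to-node messaging, not your simulator's actual logic: each "node" (thread) writes its value into shared memory, waits at a barrier, then reads a neighbour's value. The names (`exchange`, `var_name`) are made up for the example.

```cuda
#include <cstdio>

// Each thread plays the role of one sensor node. Nodes exchange
// values through a __shared__ array visible to the whole block.
__global__ void exchange(const float *in, float *out, int n)
{
    __shared__ float var_name[64];   // one slot per thread in the block
    int i = threadIdx.x;
    if (i < n) {
        var_name[i] = in[i];         // each node publishes its value
        __syncthreads();             // barrier: wait for all writes
        // read a neighbour's value (wrap-around), as a stand-in
        // for one node receiving information from another
        out[i] = var_name[(i + 1) % n];
    }
}

int main()
{
    const int n = 64;
    float h_in[n], h_out[n];
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    exchange<<<1, n>>>(d_in, d_out, n);   // one block of 64 threads

    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("node 0 received %.0f\n", h_out[0]);   // node 1's value

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Note the `__syncthreads()` call: without it, a thread could read a neighbour's slot before that neighbour has written it.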

I don't know what your simulation problem is, but it doesn't sound like you want to spend much time reading about how CUDA works. Maybe OpenMP would work better for you: when it applies, parallelizing with OpenMP is easier than with CUDA, and one of the things that is easier is sharing information, since all threads see the same host memory. Of course, I don't want to discourage you; I'm just being practical.