Multi-GPU and constant memory help


Is it possible to use the constant memory of each device in a multi-GPU setup? I’ve used constant memory on a single GPU by declaring the symbols as global variables. Now I’m clueless as to how to declare, initialize (cudaMemcpyToSymbol), and use constant memory in the kernel for each CPU thread in a multi-GPU setup. Any example code or pointers for a beginner?

Thanks! :)

You should do it the same way you already do. Constant memory is bound to the current context - i.e. if you’re in CPU thread 2 and call cudaSetDevice(2), then just copy the data to the constant memory and use it - it will belong to GPU #2.
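A minimal sketch of this idea (the kernel, symbol, and coefficient names are all illustrative, not from any particular codebase): every device gets its own copy of the `__constant__` symbol, and `cudaMemcpyToSymbol` writes to the copy belonging to whichever device is current at the time of the call. Here a single host loop plays the role of the per-GPU CPU threads.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One copy of this symbol exists in each device's constant memory.
__constant__ float d_coeffs[4];

__global__ void scaleKernel(float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = d_coeffs[i % 4] * (float)i;
}

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    float h_coeffs[4] = {1.0f, 2.0f, 3.0f, 4.0f};

    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaSetDevice(dev);  // make this GPU the current device

        // This fills the __constant__ copy on the *current* device only;
        // repeat for every device you intend to launch kernels on.
        cudaMemcpyToSymbol(d_coeffs, h_coeffs, sizeof(h_coeffs));

        float *d_out = nullptr;
        cudaMalloc(&d_out, 256 * sizeof(float));
        scaleKernel<<<1, 256>>>(d_out, 256);
        cudaDeviceSynchronize();
        cudaFree(d_out);
    }
    return 0;
}
```

If you use one CPU thread per GPU instead of a loop, the same rule applies: each thread calls cudaSetDevice for its own GPU and then does its own cudaMemcpyToSymbol, since the copy only affects that thread's current device.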