Implied static storage of __constant__ variables, and what happens if two .cu files both use lots of constant memory

I am working on a rapidly growing project that makes use of CUDA. I have decided to place CUDA kernels that perform different tasks in separate .cu files. But I have one question regarding the implied static storage of __constant__ variables.

Say I have one .cu file that defines __constant__ variables that together use almost 64 KB of space. Then I have another .cu file that also defines a lot of __constant__ variables, likewise using close to 64 KB.

Considering that the GPU only offers 64 KB of constant memory, I am not sure what will happen.
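
To make the scenario concrete, here is a rough sketch of what I mean; the file layout, symbol names, and sizes (fileA.cu, dConstA, kernelA, uploadA, launchA, 16000 floats ≈ 62.5 KB) are purely illustrative:

    // fileA.cu -- illustrative names only
    #include <cuda_runtime.h>

    // close to 64 KB of constant data: 16000 floats * 4 bytes = 62.5 KB
    __constant__ float dConstA[16000];

    __global__ void kernelA(float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = dConstA[i % 16000];
    }

    // With whole-program compilation the __constant__ symbol is only visible in
    // this translation unit, so the cudaMemcpyToSymbol call lives here as well.
    void uploadA(const float *hostSrc)
    {
        cudaMemcpyToSymbol(dConstA, hostSrc, sizeof(dConstA));
    }

    void launchA(float *dOut, int n)
    {
        kernelA<<<(n + 255) / 256, 256>>>(dOut, n);
    }

    // fileB.cu looks the same, with its own dConstB / kernelB / uploadB / launchB.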

I assume this sequence of calls will work (a concrete code sketch follows the list):

  1. from first .cu file: cudaMemcpyToSymbol(constant variable in first .cu file, source data)
  2. from first .cu file: <<<call Kernel from first .cu file>>>
  3. from second .cu file: cudaMemcpyToSymbol(constant variable in second .cu file, source data)
  4. from second .cu file: <<<call Kernel from second .cu file>>>
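
As a concrete sketch of that first sequence, using the hypothetical uploadA/launchA and uploadB/launchB wrappers from the example above:

    // main.cpp -- sequence 1: each file's copy and launch complete before the other's begin
    #include <cuda_runtime.h>

    void uploadA(const float *); void launchA(float *, int);   // from first .cu file
    void uploadB(const float *); void launchB(float *, int);   // from second .cu file

    void runSequentially(const float *srcA, const float *srcB, float *dOut, int n)
    {
        uploadA(srcA);     // 1. copy into the first file's __constant__ variable
        launchA(dOut, n);  // 2. launch the kernel from the first .cu file
        uploadB(srcB);     // 3. copy into the second file's __constant__ variable
        launchB(dOut, n);  // 4. launch the kernel from the second .cu file
        cudaDeviceSynchronize();
    }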

But is the following safe (again sketched in code after the list)?

  1. from first .cu file: cudaMemcpyToSymbol(constant variable in first .cu file, source data)
  2. from second .cu file: cudaMemcpyToSymbol(constant variable in second .cu file, source data)
  3. from first .cu file: <<<call Kernel from first .cu file>>>
  4. from second .cu file: <<<call Kernel from second .cu file>>>
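
Expressed with the same hypothetical wrappers, the ordering I am asking about would be:

    // Both uploads are issued before either kernel runs
    void runInterleaved(const float *srcA, const float *srcB, float *dOut, int n)
    {
        uploadA(srcA);     // 1. copy to the __constant__ variable in the first .cu file
        uploadB(srcB);     // 2. copy to the __constant__ variable in the second .cu file
        launchA(dOut, n);  // 3. kernel from the first .cu file
        launchB(dOut, n);  // 4. kernel from the second .cu file
        cudaDeviceSynchronize();
    }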

Will the __constant__ variables from both .cu files be assigned to the same memory location on the device, so that the data copied in step 2 overwrites the data copied in step 1?

Or are the variables kept at physically different memory locations, with the 64 KB of constant memory simply being “paged” to the location belonging to whichever .cu file’s kernel is currently running?

I guess I could find out by experimentation, but maybe some of you already know the answer.

Christian