Do kernels launched with CUDA MPS share global memory bandwidth?

For example, if I have kernel A and kernel B, each reading and writing global memory, will they share the bandwidth? Or will the two kernels' global read and write requests be serialized, so that one kernel waits for the other to finish?

If the kernels run concurrently, they share the global memory bandwidth: the memory subsystem services requests from all resident kernels as they arrive rather than serializing one kernel's traffic behind the other's. MPS does not change this behavior; it simply allows kernels from different processes to be resident on the GPU at the same time.
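
A minimal sketch of how you could observe this yourself, assuming the MPS control daemon is already running (e.g. started with `nvidia-cuda-mps-control -d`): a bandwidth-bound copy kernel timed with CUDA events. The kernel, grid dimensions, and buffer sizes below are illustrative choices, not anything MPS-specific.

```cuda
// Bandwidth-bound copy kernel. Run one instance for a baseline, then two
// instances concurrently under MPS; if bandwidth is shared, each instance
// should report roughly half of the single-instance figure.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void copyKernel(const float* __restrict__ in,
                           float* __restrict__ out, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    size_t stride = (size_t)gridDim.x * blockDim.x;
    for (; i < n; i += stride)
        out[i] = in[i];                    // one read + one write per element
}

int main() {
    const size_t n = 1 << 26;              // 64M floats = 256 MiB per buffer
    const size_t bytes = n * sizeof(float);
    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemset(d_in, 0, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    const int iters = 50;
    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        copyKernel<<<1024, 256>>>(d_in, d_out, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    // Each iteration moves 2 * bytes (one read and one write of the buffer).
    double gbps = (2.0 * bytes * iters) / (ms / 1e3) / 1e9;
    printf("Effective bandwidth: %.1f GB/s\n", gbps);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Launching this program from two shells at the same time (two processes, so MPS is what lets their kernels overlap on the GPU) should show each process achieving a fraction of the baseline bandwidth rather than one process finishing before the other starts.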