Concurrent access to the same page from two GPUs

Hi, I’m wondering how the CUDA Unified Memory system handles simultaneous access to non-overlapping memory regions that fall within the same page, coming from two different GPUs. Does it trigger back-and-forth page faults (page thrashing) on both GPUs? A minimal sketch of the scenario I mean is below.
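
For illustration only, here is roughly what I have in mind (assuming at least two devices with `concurrentManagedAccess` support, e.g. Pascal or newer; the kernel and names are just placeholders): both GPUs concurrently write to their own half of a single 4 KiB managed page.

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Each launch increments its own non-overlapping slice of the buffer.
__global__ void touch(int *p, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] += 1;
}

int main() {
    const int pageInts = 4096 / sizeof(int);   // one 4 KiB page worth of ints
    int *buf = nullptr;
    cudaMallocManaged(&buf, 4096);             // a single managed page

    // GPU 0 touches the first half of the page.
    cudaSetDevice(0);
    touch<<<1, pageInts / 2>>>(buf, pageInts / 2);

    // GPU 1 touches the second half of the same page, concurrently.
    cudaSetDevice(1);
    touch<<<1, pageInts / 2>>>(buf + pageInts / 2, pageInts / 2);

    // Wait for both devices to finish.
    cudaSetDevice(0); cudaDeviceSynchronize();
    cudaSetDevice(1); cudaDeviceSynchronize();

    printf("buf[0]=%d, buf[%d]=%d\n", buf[0], pageInts - 1, buf[pageInts - 1]);
    cudaFree(buf);
    return 0;
}
```

So the question is: when both kernels run at the same time, does the page migrate back and forth between GPU 0 and GPU 1 on every fault, even though the two kernels never touch the same bytes?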

Thanks.