I am using UVM (managed memory) to test moving work from one GPU to another. I start on GPU0 and allocate some managed memory. At some point I synchronize everything with cuCtxSynchronize, then use cuCtxSetCurrent to move all subsequent work to GPU1, and also advise the driver that the preferred location of the managed memory is now GPU1.

While this runs, I watch nvidia-smi for utilization on GPU0/GPU1. I expected that after switching to GPU1, nvidia-smi would show no utilization on GPU0, but I still see 1% utilization on GPU0 even though all the work is now done on GPU1.

I wonder whether, even though the memory is unified across the GPUs, GPU0 is still being accessed in some way after everything has moved to GPU1. The unified memory documentation states that with managed memory the physical memory location is determined from the pointer value. Does this mean that some portion of the memory (allocated while GPU0 was current) is always accessed through GPU0?

I am running on a host with two V100 GPUs.
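For reference, here is a minimal sketch of the pattern I described, written with the CUDA runtime API rather than the driver API I actually use (cudaSetDevice plays the role of cuCtxSetCurrent, cudaDeviceSynchronize the role of cuCtxSynchronize). The kernel name `step` and the sizes are placeholders, not my real code:

```cuda
// Minimal sketch, assuming a trivial kernel and 2 visible GPUs.
#include <cuda_runtime.h>

__global__ void step(float *data, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    const size_t n = 1 << 20;
    float *data;

    cudaSetDevice(0);
    cudaMallocManaged(&data, n * sizeof(float)); // allocated while GPU0 is current

    step<<<(n + 255) / 256, 256>>>(data, n);     // work on GPU0
    cudaDeviceSynchronize();                     // analogous to cuCtxSynchronize

    // Switch all subsequent work to GPU1 (analogous to cuCtxSetCurrent)
    cudaSetDevice(1);
    cudaMemAdvise(data, n * sizeof(float),
                  cudaMemAdviseSetPreferredLocation, 1);
    cudaMemPrefetchAsync(data, n * sizeof(float), 1); // migrate pages to GPU1

    step<<<(n + 255) / 256, 256>>>(data, n);     // work now runs on GPU1
    cudaDeviceSynchronize();

    cudaFree(data);
    return 0;
}
```

Even with this pattern, nvidia-smi keeps reporting ~1% utilization on GPU0 after the switch to GPU1.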