MPI and CUDA on different compute capability GPUs

We’re looking at expanding our GPU compute capacity, and at how well we can mix Kepler and Pascal series GPUs.

Suppose we had two boxes, otherwise identical, but one with K80s and one with P100s. We want to run CUDA-enabled MPI code across them. I believe this should work: we can use exactly the same (appropriately compiled) binaries on both machines, and as long as the MPI ranks exchange only appropriately typed and translatable data, there shouldn't be a problem with the fact that it's executing on a heterogeneous architecture?
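For context, my understanding of "appropriately compiled" here is a fat binary that embeds code for both architectures: compute capability 3.7 for the K80s and 6.0 for the P100s. With nvcc, that would look something like the sketch below (the source file name is a placeholder, and the exact `-gencode` choices are my assumption of the right targets):

```sh
# Sketch: build one fat binary carrying native SASS for both Kepler
# (K80 = sm_37) and Pascal (P100 = sm_60), plus PTX for the newest
# target so it can JIT on future architectures.
nvcc -gencode arch=compute_37,code=sm_37 \
     -gencode arch=compute_60,code=sm_60 \
     -gencode arch=compute_60,code=compute_60 \
     -o solver solver.cu   # "solver.cu" is a hypothetical source file
```

At run time the CUDA driver on each node picks the matching SASS from the fat binary, so the same executable should run natively on both boxes; the MPI side never sees the difference.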

Just want to be quite sure before investing!