Can I allocate a single array that lies on two different RTX 2080 GPUs (NVLink)?

As is known, the RTX 2080/Ti/Titan support SLI over NVLink, which lets the cards share their memory at different addresses in a flat address space (NVIDIA SLI GeForce RTX 2080 Ti and RTX 2080 with NVLink Review - Conclusion | TechPowerUp):

"With NVLink things have changed dramatically. All SLI member cards can now share their memory, with the VRAM of each card sitting at different addresses in a flat address space. Each GPU is able to access the other's memory."

So, can I allocate a single array that lies across two GPUs (the first part on GPU-0 and the second part on GPU-1)?

Or does NVLink carry the same restrictions and capabilities as ordinary P2P (GPUDirect 2.0) over PCIe?

The short answer is no.

If you allocate an array via cudaMalloc, the allocation will be placed entirely on the GPU most recently selected with cudaSetDevice, or on device 0 if no call to cudaSetDevice has occurred.
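
A minimal sketch of that behavior (device numbers and sizes here are arbitrary): each cudaMalloc lands wholly on the currently selected device, and peer access (over NVLink or PCIe) only lets one GPU dereference the other's allocation; the backing memory never spans devices.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    float *a0 = nullptr, *a1 = nullptr;
    size_t bytes = 1 << 20;  // 1 MiB, arbitrary example size

    // Each allocation lives entirely on the device selected at the time.
    cudaSetDevice(0);
    cudaMalloc(&a0, bytes);   // resides entirely in GPU-0 VRAM

    cudaSetDevice(1);
    cudaMalloc(&a1, bytes);   // resides entirely in GPU-1 VRAM

    // P2P lets a kernel running on GPU-1 dereference a0 directly,
    // but the data stays physically on GPU-0.
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 1, 0);  // can device 1 access device 0?
    if (canAccess) {
        // Called with device 1 current: enable its access to device 0.
        cudaDeviceEnablePeerAccess(0, 0);
    }
    printf("peer access 1->0: %s\n", canAccess ? "yes" : "no");

    cudaFree(a1);
    cudaSetDevice(0);
    cudaFree(a0);
    return 0;
}
```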