
For example, say I allocate an integer A on GPU1 and an array B on GPU2. Can I execute a function that includes the expression B(1) = B(2) + A?

Same answer as your other post:

The device memories are separate, so one GPU cannot directly access another GPU's memory. This is true for OpenACC, CUDA Fortran, and CUDA C.
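For example, here's a minimal CUDA Fortran sketch (the device numbers and variable names are just illustrative) of what you have to do instead: explicitly stage the data, here through a host copy, so the second GPU gets its own copy.

Code:
! Each GPU has its own memory; data on device 0 must be copied explicitly
! before a kernel running on device 1 can use it.
program separate_memories
  use cudafor
  implicit none
  integer, device, allocatable :: a_dev0(:)   ! allocated on device 0
  integer, device, allocatable :: a_dev1(:)   ! allocated on device 1
  integer :: a_host(4), istat

  istat = cudaSetDevice(0)
  allocate(a_dev0(4))
  a_host = (/ 1, 2, 3, 4 /)
  a_dev0 = a_host                 ! host -> device 0

  a_host = a_dev0                 ! device 0 -> host (staging copy)

  istat = cudaSetDevice(1)
  allocate(a_dev1(4))
  a_dev1 = a_host                 ! host -> device 1
end program separate_memories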

The closest you can come to this is to use CUDA Unified Memory, where the runtime manages the device memory for you. Then both GPUs will "see" the same memory.
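For example, a small CUDA Fortran sketch using the managed attribute (the names, device numbers, and launch configuration are just illustrative, and it assumes a compiler and GPUs that support managed memory):

Code:
module kernels_m
  use cudafor
contains
  ! Evaluates B(1) = B(2) + A on whichever device the kernel is launched on
  attributes(global) subroutine update(B, A)
    integer :: B(:)
    integer, value :: A
    if (threadIdx%x == 1 .and. blockIdx%x == 1) B(1) = B(2) + A
  end subroutine update
end module kernels_m

program unified_demo
  use cudafor
  use kernels_m
  implicit none
  integer, managed :: A                 ! managed: visible to host and all GPUs
  integer, managed, allocatable :: B(:)
  integer :: istat

  allocate(B(2))
  A = 5
  B = (/ 0, 10 /)

  istat = cudaSetDevice(1)              ! could just as well be device 0
  call update<<<1, 1>>>(B, A)
  istat = cudaDeviceSynchronize()

  print *, 'B(1) =', B(1)               ! expect 15
end program unified_demo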

Compute would still not be implicitly split between the two devices, though. Maybe that will be possible in the future, but not currently.
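If you want both GPUs working on the same data today, you have to divide the work yourself, e.g. along these lines (again only a sketch; it assumes managed memory and a GPU/driver combination that allows concurrent access to it from both devices):

Code:
program split_work
  use cudafor
  implicit none
  integer, parameter :: n = 1000000
  real, managed, allocatable :: x(:)
  integer :: i, istat

  allocate(x(n))
  x = 1.0

  ! First half of the loop on device 0
  istat = cudaSetDevice(0)
  !$cuf kernel do <<<*, *>>>
  do i = 1, n/2
     x(i) = 2.0*x(i)
  end do

  ! Second half on device 1
  istat = cudaSetDevice(1)
  !$cuf kernel do <<<*, *>>>
  do i = n/2+1, n
     x(i) = 2.0*x(i)
  end do

  ! Wait for both devices before reading x on the host
  istat = cudaSetDevice(0)
  istat = cudaDeviceSynchronize()
  istat = cudaSetDevice(1)
  istat = cudaDeviceSynchronize()

  print *, x(1), x(n)    ! expect 2.0 2.0
end program split_work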