Parallel computing on two GPUs

Good afternoon, dear forum members!
I have a pressing question: I want to parallelize my cuFFT-based calculations across two graphics processors, namely a 2080 Ti and a 1080 Ti. These GPUs are built on different architectures, Turing and Pascal respectively.
Is it possible to create a multidimensional plan with cufftMakePlan3d() spanning these devices, given their different GPU architectures?

The documentation says:
"Starting with cuFFT version 7.0, a subset of single GPU functionality is supported for multiple GPU execution.

Requirements and limitations:
All GPUs must have the same CUDA architecture level and support Unified Virtual Address Space."

Thank you very much for your attention!

So, no, it is not possible. The 2080 Ti is compute capability 7.5 and the 1080 Ti is compute capability 6.1, so they do not satisfy the same-architecture requirement.
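For reference, this is roughly how a multi-GPU 3D plan is set up with cuFFT's XT API: you create a handle, attach the GPUs with cufftXtSetGPUs(), and then call cufftMakePlan3d(). With mismatched architectures like the pair above, cuFFT is expected to reject the configuration during plan setup. This is only a sketch; the grid size, FFT type, and device IDs 0 and 1 are placeholder assumptions, and it needs a CUDA toolkit and two GPUs to actually run.

```cpp
// Sketch of multi-GPU plan creation with cuFFT (requires CUDA toolkit, nvcc).
// Device IDs and transform size below are assumptions for illustration.
#include <cufftXt.h>
#include <cstdio>

int main() {
    cufftHandle plan;
    cufftResult r = cufftCreate(&plan);
    if (r != CUFFT_SUCCESS) { std::printf("cufftCreate failed: %d\n", r); return 1; }

    // Attach both GPUs to the plan. With a 2080 Ti (cc 7.5) and a
    // 1080 Ti (cc 6.1), the same-architecture requirement is violated,
    // so cuFFT should report an error here or at plan creation.
    int gpus[2] = {0, 1};
    r = cufftXtSetGPUs(plan, 2, gpus);
    if (r != CUFFT_SUCCESS) { std::printf("cufftXtSetGPUs failed: %d\n", r); return 1; }

    // Create a 3D complex-to-complex plan across the attached GPUs.
    size_t workSizes[2];
    r = cufftMakePlan3d(plan, 128, 128, 128, CUFFT_C2C, workSizes);
    if (r != CUFFT_SUCCESS) { std::printf("cufftMakePlan3d failed: %d\n", r); return 1; }

    cufftDestroy(plan);
    return 0;
}
```

On a machine with two GPUs of the same architecture (e.g. two 2080 Tis), the same code should succeed and return work-area sizes in workSizes.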