cuBLAS-XT with different cards

Hi,

I’m planning to gather used Titans, Titan Blacks, and even a Titan Z.
My question is this: can I use cuBLAS-XT with these different cards at the same time in the same node?
For example, one computer with 1 Titan Z, 1 Titan, and 1 Titan Black in it. Do my 64-bit floating-point calculations take a big hit? Or does it run (or scale) only as fast as the weakest link (in this case the regular Titan, I guess)?

P.S. I have access to the Premier version of cuBLAS-XT.

cublasXT will parallelize the computation across all three GPUs, even if they are different.

However, the workload is balanced equally across all the GPUs, so performance will not be optimal if the 64-bit peak performance is vastly different between the various GPUs in your workstation.
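To make that concrete, here is a minimal sketch of a multi-GPU double-precision GEMM with cublasXt. It is untested here and assumes a CUDA toolkit with cublasXt installed and that your three cards are device IDs 0, 1, and 2 (adjust to your system):

```c
#include <stdio.h>
#include <stdlib.h>
#include <cublasXt.h>

int main(void)
{
    const size_t n = 4096;              /* square matrices for simplicity */
    double *A = malloc(n * n * sizeof *A);
    double *B = malloc(n * n * sizeof *B);
    double *C = malloc(n * n * sizeof *C);
    if (!A || !B || !C) return 1;
    for (size_t i = 0; i < n * n; ++i) { A[i] = 1.0; B[i] = 1.0; C[i] = 0.0; }

    cublasXtHandle_t handle;
    if (cublasXtCreate(&handle) != CUBLAS_STATUS_SUCCESS) {
        fprintf(stderr, "cublasXtCreate failed\n");
        return 1;
    }

    /* Device IDs 0..2 are an assumption. cublasXt splits the GEMM into
       tiles and dispatches them across the selected GPUs, so a much
       slower board will hold back its share of the work. */
    int devices[3] = {0, 1, 2};
    cublasXtDeviceSelect(handle, 3, devices);

    /* Host pointers are accepted: cublasXt stages tiles to the GPUs itself. */
    const double alpha = 1.0, beta = 0.0;
    cublasXtDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                  &alpha, A, n, B, n, &beta, C, n);

    printf("C[0] = %f\n", C[0]);        /* all-ones inputs: expect n */

    cublasXtDestroy(handle);
    free(A); free(B); free(C);
    return 0;
}
```

Note that cublasXt works on ordinary host buffers, so no per-device cudaMalloc/cudaMemcpy is needed for this call; the library handles the tiling and transfers internally.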

Also, cublasXT no longer requires a separate evaluation license with the latest CUDA 7 toolkit release.

ukapasi,
So the cuBLAS-XT that is included in CUDA 7.0 and above automatically parallelizes execution across all available GPUs?
And there is no longer any need to download the cuBLAS-XT Premier version?

Please check GitHub - linnanwang/BLASX: a heterogeneous multi-GPU level-3 BLAS library.