Optimal multi-GPU system

Dear all,
I am building an 8-GPU system for machine learning (SVM) and am looking for guidance/confirmation that I am doing this right.
So far, I am thinking the system would comprise the following:
Titan Xp (1)
1080 (5)
1030 (2)
Motherboard: one that supports NVLink (thinking SuperMicro)

My questions are:

  1. Is it OK to have different types of GPUs, or should they all be the same?
  2. What is the appropriate way to connect the GPUs? Would I need SLI or something similar, given that the motherboard and the cards are all Pascal and would support NVLink?

Later: OK, I see the thread on the other posting. You can disregard these comments.

None of the GPUs you list use NVLink; they are all PCIe GPUs. You don't need SLI. The only GPU conveniently available today that supports NVLink is the Tesla P100 (the Tesla V100 will soon be available with NVLink as well).
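One easy way to see what you actually have in the box is to query `nvidia-smi` (the `--query-gpu` and `--format=csv` flags are standard; `nvidia-smi topo -m` likewise shows the PCIe/NVLink topology). Below is a minimal sketch that parses that CSV output into a list of devices; the sample text is illustrative, not captured from real hardware, and `parse_gpu_list` is a hypothetical helper name:

```python
import subprocess

def parse_gpu_list(csv_text):
    """Parse the output of
    `nvidia-smi --query-gpu=name,memory.total --format=csv,noheader`
    into a list of (name, memory) tuples, one per GPU."""
    gpus = []
    for line in csv_text.strip().splitlines():
        # Each line looks like: "GeForce GTX 1080, 8192 MiB"
        name, mem = (field.strip() for field in line.split(",", 1))
        gpus.append((name, mem))
    return gpus

if __name__ == "__main__":
    # Illustrative output as it might look on the proposed mixed box.
    sample = """\
TITAN Xp, 12288 MiB
GeForce GTX 1080, 8192 MiB
GeForce GT 1030, 2048 MiB"""
    for name, mem in parse_gpu_list(sample):
        print(name, "-", mem)

    # On a real system, uncomment to query the actual GPUs instead:
    # out = subprocess.run(
    #     ["nvidia-smi", "--query-gpu=name,memory.total",
    #      "--format=csv,noheader"],
    #     capture_output=True, text=True, check=True).stdout
    # print(parse_gpu_list(out))
```

A mixed system like the one proposed shows up immediately as three different device names, which also means three different performance tiers your code would have to load-balance across.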

The 1030 GPUs are not going to be that interesting from a compute perspective. They are fine for learning, but not very powerful at all; I would see little point in having two of them.

Which codes for SVM were you planning to use that are GPU accelerated?

I am using GPU-accelerated LIBSVM, from Athanasopoulos, A. Dimou, V. Mezaris, and I. Kompatsiaris,
available at http://mklab.iti.gr/project/GPU-LIBSVM
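Since that package keeps LIBSVM's interface, the training data is plain text in LIBSVM's sparse format (`<label> <index>:<value> ...`, 1-based ascending indices, zeros omitted). As a small sketch of preparing such a file, here is a hypothetical helper (`to_libsvm_line` is my own name, not part of LIBSVM) that formats one dense sample:

```python
def to_libsvm_line(label, features):
    """Format one sample in LIBSVM's sparse text format:
    `<label> <index>:<value> ...` with 1-based ascending indices;
    zero-valued features are omitted."""
    parts = [str(label)]
    for i, x in enumerate(features, start=1):
        if x != 0:
            parts.append(f"{i}:{x:g}")
    return " ".join(parts)

# Two toy samples for a binary SVM problem:
print(to_libsvm_line(1, [0.5, 0.0, 2.0]))    # -> "1 1:0.5 3:2"
print(to_libsvm_line(-1, [0.0, 1.25, 0.0]))  # -> "-1 2:1.25"
```

One such line per sample, written to a file, is what LIBSVM-style `svm-train` tools consume directly.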