Hardware recommendations for a multi-GPU HPC setup

I'm building an HPC machine with a multi-GPU setup.
Most of the CUDA-based calculations will be performed in single precision, and probably a bit of deep learning will also be done on the machine. No SLI is required; however, a large amount of system RAM is required.

I have decided on the following:

Motherboard

Asus X99-E Workstation DDR4 Motherboard
This motherboard has a PLX chip to provide extra PCIe lanes
[url]https://www.scan.co.uk/products/asus-x99-e-ws-intel-x99-s-2011-3-ddr4-sata-iii-6gb-s-sata-raid-pcie-30-(x16)-ceb[/url]

GPU: 4 x GTX 1080

MSI NVIDIA GeForce GTX 1080 8GB SEAHAWK X Watercooled
[url]https://www.scan.co.uk/products/msi-geforce-gtx-1080-seahawk-x-8gb-gddr5x-2560-core-vr-ready-graphics-card-with-corsair-hydro-h55-ai[/url]

CPU: Intel Core i7-6850K

RAM: 128GB
2 x Corsair Vengeance Blue LED 64GB DDR4-3000 memory kit (4 x 16GB)
[url]https://www.scan.co.uk/products/64gb-(4x16gb)-corsair-ddr4-vengeance-led-pc4-24000-(3000)-non-ecc-unbuffered-cas-15-17-17-35-blue-le[/url]

SSD (for operating system)

Samsung 500GB 960 Evo PCIe Solid State Drive/SSD MZ-V6E500BW
[url]https://www.scan.co.uk/products/500gb-samsung-960-evo-v-nand-m2-pcie-gen-30-x4-nvme-11-3200mb-s-read-1800mb-s-write-330k-330k-iops[/url]

CASE

Thermaltake Core X9 Stackable Large Tower Case
[url]https://www.scan.co.uk/products/thermaltake-core-x9-stackable-black-e-atx-case-with-side-window-plus-4x-usb-30-w-o-psu[/url]

PSU

EVGA SuperNOVA 1600W Modular Power Supply
[url]https://www.scan.co.uk/products/1300w-evga-supernova-g2-80-plus-gold-full-modular-sli-crossfire-single-rail-1083a-plus12v-1x140mm-fa[/url]

I have spent some time designing this config.
Could you please comment on whether any parts should be replaced, or whether there are any obvious misconfigurations in this setup?

There are people in these forums who have built ambitious multi-GPU systems similar to yours, so definitely wait for them to chime in. Here are some thoughts of mine you may want to consider:

I assume you are looking for a single-CPU socket workstation configuration. I would suggest a CPU with 40 PCIe lanes, for maximum throughput between the CPU and GPUs.

Since it looks like most of the parallelizable work is to be offloaded to the GPUs, I would suggest going for fast CPU cores (>= 3.4 GHz base clock), rather than a lot of CPU cores, so as not to become limited in the serial portions of your workloads.

High-throughput system memory is always a good idea for an HPC box; since you also want quite a lot of system memory, I would suggest a CPU with ECC support so the integrity of all that data is ensured.

The previous items taken together point to, e.g., a Xeon E5-1650 v4 (~ US$ 630; 6 cores; four-channel DDR4-2400 giving up to 76.8 GB/sec theoretical bandwidth).
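In case it helps, here is where the 76.8 GB/sec figure comes from: DDR memory moves 8 bytes per transfer per channel, so peak bandwidth is just transfer rate times 8 bytes times channel count. A quick sketch (the function name is mine, purely for illustration):

```python
# Theoretical peak DDR bandwidth = transfer rate (MT/s) x 8 bytes/transfer x channels.
def peak_bw_gbs(mt_per_s, channels, bus_bytes=8):
    """Peak memory bandwidth in decimal GB/s."""
    return mt_per_s * bus_bytes * channels / 1000.0

# Four-channel DDR4-2400, as supported by the Xeon E5-1650 v4:
print(peak_bw_gbs(2400, 4))  # -> 76.8
```

Note these are theoretical peaks; measured STREAM-style bandwidth will come in somewhat lower.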

For PSUs, I would suggest an 80PLUS Platinum rated unit at this time; they aren't hugely more expensive than the Gold-rated ones (+$50 at most) and you will save on electricity over time. If you prefer the EVGA brand, you may want to look at their SuperNOVA P2 1600W. For optimal efficiency and robustness, the sum of the nominal wattage of all system components should be 50%-60% of the nominal wattage of the PSU. The summed wattage of your components will be around 1000W it seems, so a 1600W PSU looks about right.
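A quick back-of-the-envelope check of that sizing rule. The component wattages below are illustrative estimates (the GTX 1080's 180 W and the i7-6850K's 140 W are the published TDPs; the figure for motherboard, RAM, and drives is an assumed round number):

```python
# Rough PSU load check: summed nominal component draw vs. PSU rating.
# Wattages are illustrative estimates, not measured figures.
components = {
    "4x GTX 1080 (180 W TDP each)": 4 * 180,
    "CPU, i7-6850K (140 W TDP)": 140,
    "motherboard, RAM, SSD, fans (assumed)": 100,
}
total_w = sum(components.values())
psu_w = 1600
print(f"{total_w} W total, {total_w / psu_w:.0%} of a {psu_w} W PSU")  # -> 960 W total, 60%
```

That lands right at the upper end of the 50%-60% sweet spot, which is why the 1600W unit looks reasonable here.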

[Later:]

It seems that the Core i7-6850K CPU you listed above is basically the consumer part equivalent to the Xeon E5-1650 v4 I suggested. So if you decide to stick with consumer-grade components rather than using workstation-class components, that seems like a good choice.

Best I can tell from internet research the performance gain from “overclocked” memory is minimal, e.g. a four-channel configuration at 58 GB/sec measured memory bandwidth at DDR4-2400 vs 63 GB/sec measured memory bandwidth at DDR4-3200. The raw performance difference would dilute to noise level in terms of application-level performance. So simply going with DDR4-2400 may save you some money.
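Putting numbers on "noise level", using the measured figures quoted above (taken from internet benchmarks, not re-measured by me):

```python
# Relative raw bandwidth gain from the measured figures quoted above.
bw_2400 = 58.0  # GB/s measured, four-channel DDR4-2400 (assumed benchmark figure)
bw_3200 = 63.0  # GB/s measured, four-channel DDR4-3200 (assumed benchmark figure)
gain = (bw_3200 - bw_2400) / bw_2400
print(f"{gain:.1%} raw bandwidth gain from the faster memory")  # -> 8.6%
```

Since only a fraction of most application runtime is memory-bandwidth bound, that single-digit raw gain shrinks further at the application level.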