Multi-GPU Setup

I am looking into building some workstations for deep-learning development with TensorFlow. The main limitation is GPU memory, so I was thinking of packing the maximum number of GTX 1070s (best GB for the money) into each of them. I have a few questions.

  1. Right now, when I run a deep network on my 1 CPU / 1 GPU setup, I see the CPU constantly at 100% even though the whole model is running on the GPU (a minimal placement check is sketched after this list), so I was thinking of going for a dual-CPU setup with a mainboard like [url]https://www.asus.com/Motherboards/Z9PED8_WS/[/url] - opinions? An alternative could be a single-CPU setup with something like [url]https://www.asus.com/Motherboards/X99E_WS/[/url]. Would a single CPU be a bottleneck?

  2. I was thinking of using riser cards to fit 7 GPUs into each workstation, using a large case like [url]http://www.lian-li.com/en/dt_portfolio/pc-d8000/[/url]. Do the mainboards above even support 7 cards (the second sketch below is how I'd verify TensorFlow sees them all)? Any experience with a similar setup, and anything you would advise?
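
For question 1, one quick way to confirm the model really is placed on the GPU is TensorFlow's device-placement logging. This is only a minimal sketch in the TF 1.x API of the time; the device string '/gpu:0' and the matrix sizes are arbitrary placeholders, not taken from my actual model:

[code]
import tensorflow as tf

# Toy op pinned to the first GPU; sizes are arbitrary.
with tf.device('/gpu:0'):
    a = tf.random_normal([4096, 4096])
    b = tf.random_normal([4096, 4096])
    c = tf.matmul(a, b)

config = tf.ConfigProto(
    log_device_placement=True,  # print which device each op landed on
    allow_soft_placement=True)  # fall back to CPU if an op has no GPU kernel
with tf.Session(config=config) as sess:
    sess.run(c)
[/code]

The placement log shows which ops run where; high CPU load alongside a GPU-resident model is usually the input pipeline and the client run loop, which run on the CPU.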
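
For question 2, on the software side you would also want to confirm TensorFlow actually sees all seven cards once they are installed. A hedged sketch (the expected device names and the toy workload are assumptions for illustration):

[code]
import tensorflow as tf
from tensorflow.python.client import device_lib

# List the GPUs TensorFlow can see on this machine.
gpus = [d.name for d in device_lib.list_local_devices() if d.device_type == 'GPU']
print(gpus)  # on a 7-card box you'd expect seven entries, GPU:0 through GPU:6

# Spread a toy op across whatever GPUs are present (simple per-device split).
results = []
for i, _ in enumerate(gpus):
    with tf.device('/gpu:%d' % i):
        x = tf.random_normal([1024, 1024])
        results.append(tf.reduce_sum(tf.matmul(x, x)))

with tf.Session() as sess:
    print(sess.run(results))
[/code]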

Thanks!