Multi-GPU Setup

I am looking into building some workstations for deep-learning development on TensorFlow. The main limitation is GPU memory, so I was thinking of packing the maximum number of GTX 1070s (best GB for the money) into each of them. I have a few questions.
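To make the "best GB for the money" claim concrete, here is a quick back-of-envelope comparison. The VRAM sizes are the cards' real specs, but the prices are purely hypothetical placeholders (adjust them to whatever you can actually find the cards for):

```python
# VRAM-per-dollar comparison. VRAM sizes are real card specs;
# the prices are HYPOTHETICAL placeholders, not quotes.
cards = {
    # name: (vram_gb, hypothetical_price_usd)
    "GTX 1070":    (8,  400),
    "GTX 1080":    (8,  600),
    "GTX 1080 Ti": (11, 700),
    "Titan X":     (12, 1200),
}

# Rank cards by GB of VRAM per dollar spent, best first.
ranked = sorted(cards.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (vram, price) in ranked:
    print(f"{name}: {vram / price * 1000:.1f} GB per $1000")
```

With these placeholder prices the 1070 comes out on top, which is the whole rationale for stacking several of them instead of buying fewer bigger cards.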

  1. Right now, when I run a deep network on my 1 CPU / 1 GPU setup, the CPU is constantly at 100% even though the whole model is running on the GPU, so I was thinking of going for a dual-CPU setup with a mainboard like … Opinions? An alternative could be a single-CPU setup with something like … Would the single CPU be a bottleneck?

  2. I was thinking of using riser cards to fit 7 GPUs into each workstation, using a large case like … Do the mainboards above even support 7 cards? Any experience with a similar setup, and is there anything you would advise?
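On point 2, one thing worth budgeting before buying risers is PCIe lanes: consumer and single-socket HEDT CPUs only expose a fixed number of lanes, and 7 cards have to share them. A rough sketch, assuming a 40-lane PCIe 3.0 CPU (the lane count and per-lane throughput are assumptions; boards with PLX switches change the picture):

```python
# Back-of-envelope: PCIe bandwidth per GPU when 7 cards share one CPU's lanes.
# ASSUMPTIONS: a single 40-lane PCIe 3.0 CPU, ~0.985 GB/s usable per lane.
LANES_AVAILABLE = 40   # assumed lane count of the CPU
GB_PER_LANE = 0.985    # approx. usable PCIe 3.0 bandwidth per lane
num_gpus = 7

# Slots come in x16/x8/x4/x1 widths, so each GPU gets the widest
# standard width that still fits all cards within the lane budget.
slot_widths = [16, 8, 4, 1]
lanes_per_gpu = max(w for w in slot_widths if w * num_gpus <= LANES_AVAILABLE)
bandwidth_per_gpu = lanes_per_gpu * GB_PER_LANE

print(f"{lanes_per_gpu} lanes/GPU, ~{bandwidth_per_gpu:.2f} GB/s each")
```

Under these assumptions each card ends up at x4, so host-to-device transfers are several times slower than on a lone x16 card. Whether that matters depends on how often your models shuttle data across the bus.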
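On the 100%-CPU observation in point 1: before buying a second CPU, it may be worth checking whether that load is real work (input pipeline) or just TensorFlow's default thread pools spinning, and confirming the ops really sit on the GPU. A sketch of the relevant session config, assuming the TF 1.x API; the thread counts are illustrative values to experiment with:

```python
import tensorflow as tf  # assumes the TF 1.x Session API

config = tf.ConfigProto(
    log_device_placement=True,       # print which device each op actually runs on
    intra_op_parallelism_threads=2,  # cap the CPU thread pools; the defaults scale
    inter_op_parallelism_threads=2,  # with core count, which can show as 100% CPU
)
config.gpu_options.allow_growth = True  # don't grab all GPU memory up front

with tf.Session(config=config) as sess:
    ...  # run your model here and watch the placement log
```

If CPU usage drops with capped thread pools and the placement log shows everything on `/gpu:0`, the 100% was mostly preprocessing/thread overhead rather than model compute, which changes the single-vs-dual-CPU calculus.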