CPU and Memory Requirements for Tesla P100 PCIe

Hi.
Do you have a recommendation for CPU and memory specs to pair with a Tesla P100 PCIe when using it for deep learning?
Thank you for your support.

If you expect your workload to be mostly parallelized and running on the GPU, the serial portion of the workload falls to the CPU. So I would recommend a CPU with high single-thread performance and a modest number of cores, to avoid falling victim to Amdahl’s Law: once the parallel portion has been accelerated, the remaining serial portion limits the overall speedup, so fast serial execution matters more than a high core count.
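To put a number on that, here is a small Python sketch of Amdahl’s Law; the 50x GPU speedup factor and the parallel fractions below are illustrative values, not measurements:

```python
def amdahl_speedup(parallel_fraction, speedup_factor):
    """Overall speedup when a fraction of the work is accelerated.

    parallel_fraction: portion of total runtime offloaded to the GPU
    speedup_factor:    how much faster that portion runs on the GPU
    """
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / speedup_factor)

# Even with a 50x GPU speedup on the parallel part, a 10% serial
# portion caps the overall speedup below 10x -- which is why
# single-thread CPU performance still matters.
for p in (0.90, 0.95, 0.99):
    print(f"parallel fraction {p:.2f}: overall speedup {amdahl_speedup(p, 50.0):.1f}x")
```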

For long-running, compute-intensive jobs I personally favor Xeon processors with ECC support. One example would be the hexa-core Xeon E5-1650 v4 (Intel Xeon Processor E5-1650 v4, 15M Cache, 3.60 GHz, Product Specifications) at around $620. This CPU provides 40 PCIe lanes, so it can drive two GPUs with PCIe gen3 x16 interfaces. How many GPUs do you plan to have in the system? For now I will assume one GPU.
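The lane budget is simple arithmetic; a trivial sketch of the calculation, using the lane counts from the spec sheets mentioned above:

```python
CPU_PCIE_LANES = 40   # Xeon E5-1650 v4
LANES_PER_GPU  = 16   # PCIe gen3 x16 per Tesla P100

max_full_bandwidth_gpus = CPU_PCIE_LANES // LANES_PER_GPU
spare_lanes = CPU_PCIE_LANES - max_full_bandwidth_gpus * LANES_PER_GPU
print(f"GPUs at full x16 bandwidth: {max_full_bandwidth_gpus}")  # -> 2
print(f"lanes left for NVMe, NICs, etc.: {spare_lanes}")         # -> 8
```

On a running system you can verify the negotiated link width with `nvidia-smi --query-gpu=pcie.link.width.current --format=csv`.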

As for memory and storage, it depends on what kind of deep-learning tasks you expect to tackle. How large are the working sets? My personal rule of thumb is system memory size = 4x GPU memory size, which works well across a large range of use cases. The P100 PCIe carries 16 GB of HBM2, so for a four-channel DDR4 system, 64 GB of system memory (4 x 16 GB DIMMs) might be a good starting point. With ECC support that would run to about $800 for DDR4-2400 memory. Choose a larger system memory, e.g. 128 GB, if there are indications that your workload would benefit from it.
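For reference, a minimal sketch of that rule of thumb; the 4x multiplier is just my heuristic, not a hard requirement:

```python
def suggested_system_memory_gb(gpu_memory_gb, num_gpus=1, multiplier=4):
    """Rule of thumb: system RAM = 4x total GPU memory."""
    return multiplier * gpu_memory_gb * num_gpus

# Tesla P100 PCIe with 16 GB of HBM2:
print(suggested_system_memory_gb(16))     # -> 64 GB (4 x 16 GB DIMMs)
print(suggested_system_memory_gb(16, 2))  # -> 128 GB for a two-GPU system
```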

For mass storage, you might want to consider the new high-performance NVMe SSDs, which offer much better throughput than classical SATA-attached SSDs. However, they are also considerably more expensive, so you will need to assess whether they will be cost effective for you.
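If you want to gauge whether faster storage would pay off for your data-loading pattern, here is a rough Python sketch for measuring sequential read throughput; the file path is a placeholder, not something from your setup:

```python
import time

def sequential_read_throughput(path, block_size=1 << 20):
    """Rough sequential-read benchmark in MB/s.

    Page-cache effects can inflate the result, so use a file
    larger than system RAM or drop caches before running.
    """
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6

# 'training_data.bin' is a placeholder path:
# print(f"{sequential_read_throughput('training_data.bin'):.0f} MB/s")
```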

Note that your configuration choices may ultimately be limited by the range of configurations your system integrator offers. System integrators also usually add a hefty margin to the component prices indicated above.

njuffa,

Thank you very much for your reply.
That’s exactly what I needed!