<< Hardware choice questions for deep learning >>

Hi Guys,

I am planning to build my own dedicated deep learning machine based on the NVIDIA Titan X.
For that I am planning to buy an ASUS X99-A motherboard: https://www.asus.com/ie/Motherboards/X99A/specifications/
And here are my questions.

1). Is there anyone here in the community who has already built a system based on this motherboard?

2). As Ubuntu is the recommended OS, are there any known driver problems with this motherboard?

3). This motherboard supports multiple GPUs via NVIDIA SLI, both 3-way and quad. Does this also mean faster model training times with multiple GPUs?

4). Would it be a good idea to mix graphics cards, e.g. a Titan X plus some cheaper GeForce cards?

Hope you’ll help me get some answers before I spend this much money :)

That is a good motherboard, and the cheapest and best CPU for X99 with 40 PCIe lanes is the Intel i7-5930K (3.5 GHz), which is also the same CPU NVIDIA uses for their pre-built machine learning workstation.
I am building a new PC with one of those now and it is a great CPU, well worth the price. You can comfortably overclock to 3.9 GHz with no issues related to temperature.

If you are using the Titan X you can also consider Windows 7/8.1, since you can put the card in TCC mode, which “hides” the display capability of the GPU from the OS and gives slightly higher performance than is possible with the native Windows WDDM display driver. IMO it is easier to get CUDA installed and working quickly on Windows vs. Linux, but if you are already comfortable with Ubuntu then stick with that OS.
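For reference, TCC mode is toggled with nvidia-smi's driver-model flag. The sketch below only builds that command line (the GPU index is an assumption for your setup, not a tested recipe); you would run the resulting command from an elevated prompt on Windows:

```python
def tcc_command(gpu_index):
    """Build the nvidia-smi call that switches one GPU to TCC mode.

    nvidia-smi's -dm (driver model) flag takes 0 for WDDM and 1 for TCC;
    -i selects the GPU by index. Requires admin rights on Windows, and
    the GPU must not be driving a display.
    """
    return ["nvidia-smi", "-i", str(gpu_index), "-dm", "1"]

print(" ".join(tcc_command(0)))
```

Switching back to WDDM is the same invocation with `-dm 0`.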

You will need a cheap GPU for the video display; the PCs I have built usually use a GTX 750 Ti for the display and the GTX 980/Titan X for compute. So yes, it is a good idea to have more than one GPU in the build.
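To keep the display card out of compute work, CUDA applications can be restricted to the Titan X via the standard CUDA_VISIBLE_DEVICES environment variable. The device indices below are assumptions for a two-GPU layout like the one described, and the training script is hypothetical:

```python
import os

# Assume device 0 is the GTX 750 Ti (display) and device 1 is the
# Titan X (compute). A process started with this environment sees
# only the Titan X, renumbered as device 0.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="1")

# Hypothetical launch of a training script with the restricted view:
# import subprocess
# subprocess.run(["python", "train.py"], env=env, check=True)
print(env["CUDA_VISIBLE_DEVICES"])
```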

Get at least a 512 GB SSD like the Samsung 850 EVO or 850 PRO, or maybe even two if you would like to dual-boot. Those are cheap enough these days to buy more than one. The 1 TB and 2 TB versions tend to be less reliable than the 500/512 GB versions.

You should also use a liquid cooler for the CPU, and those can be a pain to install with some PC cases. Get a full tower and check that the liquid cooler fits into the particular case.

Oh and get at least a 1200W high-quality power supply so you can support multiple Titan X GPUs.

Lots of good advice from CudaaduC. I usually warn against going too low-end on CPU specifications when using CUDA and consider 3.5 GHz or higher optimal for good single-thread performance. While having a dedicated GPU (possibly low-end) for graphics is highly recommended, it is best to use a bunch of identical GPUs for the actual CUDA work, otherwise load balancing becomes much harder.
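The load-balancing point can be sketched with a toy static work split: give each GPU a share of the batch proportional to its relative throughput. The throughput numbers below are illustrative assumptions, not benchmarks; with identical GPUs the split is trivial, while with mixed cards you first have to estimate each device's speed or the fast card sits idle:

```python
def split_batch(batch_size, throughputs):
    """Split a batch across GPUs proportionally to relative throughput."""
    total = sum(throughputs)
    shares = [batch_size * t // total for t in throughputs]
    shares[0] += batch_size - sum(shares)  # hand the rounding remainder to GPU 0
    return shares

# Three identical GPUs: an even split, no tuning needed.
print(split_batch(256, [1, 1, 1]))
# A Titan X next to a card roughly a third as fast: an uneven split
# that has to be estimated and maintained by hand.
print(split_batch(256, [3, 1]))
```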

If you plan to run this rig more or less non-stop, I would suggest considering a PSU with 80plus Platinum rating, to save on electricity and limit waste heat. I recall many days where I sweated away in my cubicle simply because my PCs were giving off so much heat, and now that I compute at home, I do notice the difference in my electricity bill.
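As a rough sanity check on the efficiency argument, here is a back-of-the-envelope sketch. The efficiency figures (about 87% for 80 PLUS Gold and 92% for Platinum at typical load), the 800 W draw, and the electricity price are all illustrative assumptions:

```python
def annual_cost(load_watts, efficiency, hours_per_day=24.0, price_per_kwh=0.15):
    """Yearly electricity cost of delivering load_watts to the components.

    The wall draws load/efficiency watts; the difference is waste heat.
    """
    wall_watts = load_watts / efficiency
    kwh_per_year = wall_watts * hours_per_day * 365 / 1000
    return kwh_per_year * price_per_kwh

gold = annual_cost(800, 0.87)      # assumed 80 PLUS Gold efficiency
platinum = annual_cost(800, 0.92)  # assumed 80 PLUS Platinum efficiency
print(round(gold - platinum, 2))   # yearly saving for a 24/7 rig
```

For a rig that runs around the clock the Platinum premium typically pays for itself within a year or two, and the avoided waste heat is a bonus.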

I have done equal amounts of CUDA development work on Linux and Windows and find Linux to be a more efficient platform for software development in general. I am not a fan of Ubuntu, though. However, Ubuntu is very popular and NVIDIA seems to be basing their pre-configured DIGITS box on it, so there does not seem to be a reason not to go down that route.


Thank you CudaaduC and njuffa. Very detailed information.
Good to know that I had the right idea about what this PC should contain.

I will stick with Ubuntu; I am kind of tired of all my machines running Windows.
TCC mode, I hadn’t heard of that one.
And I see the PSU matters a lot, saving lots of money :)

Nice one, thanks for all the info, really appreciate it.

I am also looking at CUDA programming and just built my system.
The system I just built has a GigaByte X99-Gaming 5P motherboard with a 512 GB M.2 boot drive and dual GTX 980 Ti 6 GB video cards. Could you please point me to any information on the “deep learning” sites you mentioned?

Any information on which Linux Mint 17 drivers to use for CUDA development would be appreciated.

I do not see any references to “deep learning sites” in the previous posts in this thread. It might be useful to clarify what information you are looking for specifically in respect to deep learning with GPUs (presumably the kind of information that cannot be readily gleaned from Google, or Google Scholar).

I am strongly considering replacing the ASUS X99 with the ASRock X99 WS. It supports more RAM and costs less than the ASUS X99, and compared to the ASUS X99 WS it is almost half the price. Any experience with ASRock, guys?

You would definitely want to peruse relevant hardware forums to look for a discussion of the trade-offs. If there is a price difference of almost a factor of two, there would have to be some non-cosmetic reasons for this.

I have not built a system from components in many years, but when I still did that, I bought a lot of ASUS motherboards because of their high quality. Obviously superior quality back then does not tell us anything about the merits of today’s products, therefore my recommendation is to visit sites where such hardware is discussed and people relate their experience with various platforms.

njuffa, thanks for keeping me on the ground, not floating and dreaming in the skies.
That half-price figure was my mistake. Thanks a lot for the sanity check.

Hmmmmm… now, is the GTX 1070 or the GTX 1080 better for deep learning?
They are much cheaper, though.

So MarcinD, what parts did you finally use for your workstation? Any caveats, suggestions, or dos and don’ts when building a workstation based on the X99 chipset and the LGA 2011-3 socket?

I too am building a new HPC workstation. It would be great to have some feedback. The following is a tentative list of parts:

  • CPU: i7-6850K (LGA 2011-3 socket), which comes with 40 PCIe 3.0 lanes and supports Intel VT-d technology. It has no integrated graphics
  • RAM: 16 GB DDR4
  • GPU compute card: GTX 1080 with 8 GB GDDR5X VRAM
  • Xeon Phi compute card: 31S1P (requires BIOS support for above-4G decoding and Intel VT-d technology)
  • Video card: probably a GTX 780 Ti or a cheaper AMD Radeon
  • Motherboard: I was also considering the ASUS X99-A, but read in a blog that for multi-GPU rigs it is better to go with workstation-class motherboards. So, as of now, I am considering the ASUS X99-M WS in micro-ATX form factor; it is the cheapest. However, one problem with this board is that it supports a maximum of three full-length PCIe slots. As every PCIe card occupies dual-slot width, I can’t fit more than two dual-slot cards on this motherboard. I would have liked some space to install a small video card in the third PCIe slot, but it doesn’t seem possible. This means that if I buy this board, I cannot use a compute GPU in a headless configuration
  • 4G decoding issues: Xeon Phi cards require above-4G decoding. The ASUS X99-M WS supports toggling 4G decoding and Intel VT-d technology, but I do not know how enabling 4G decoding in the BIOS will affect the GTX 1080

Can anyone with experience building a multi-GPU and Xeon Phi workstation on the X99 chipset/LGA 2011-3 socket suggest a good-value-for-money motherboard with four or more PCIe slots and support for 4G decoding?

Hey nurabha, apologies for replying with a delay; I was away.

I ended up with:

  • CPU - same
  • RAM - 64 GB, with a target of 128 GB, mainly for Spark
  • GPU - same; I have only one at this time, planning on buying two more Titan Xs
  • Motherboard - same; it is capable of running 3-way SLI
  • PSU - Corsair 860W Platinum; might have to upgrade this if I go with two more Titans
  • OS - Ubuntu on bare metal, and Docker
  • Storage - SSD for the OS and data processing, the rest on mechanical 7,200 RPM storage

Works beautifully, although my only comparison is to the first AWS GPU instances, which suck, btw.
I have never used any of that Phi stuff, so I am not sure why you would need it?

And one more thing: make sure you have a big case to fit all this. It takes a lot of space, and you want heat transferred out of the case, not from one device to another.