Which GPU to choose for starting CUDA development?

Good morning,

I will begin my CS studies this October, and I am aiming to specialize in CUDA development for machine learning and machine vision.
To have an idea of how to code in CUDA, I would love to practice a little in advance.
Since I bought a Radeon 6970 rather than an Nvidia card when I got my PC, I am planning to get a very cheap Nvidia card off eBay.
My plan is to either get a GPU that just supports CUDA, like the GTX 260 (which would be enough for me since I don’t game on it), or a GTX 660 (Ti), because I want the SHIELD Tablet X1 when it comes out and would use GameStream.

What would you, as experienced developers in this field, suggest I do? I would prefer not to spend more than $70-90 on that GPU on eBay.

Greetings Steve

Hello Steve,

This is an interesting question to me. Let’s see if any experienced developer replies!

Bye

That’s an old post you responded to, but the GT 720 is a decent card that can be found for about $90.

While a lowest-end GPU can be a great choice for simply learning CUDA, I am doubtful whether it is a suitable choice for exploring machine learning or machine vision. I wouldn’t be surprised if there were no significant speedup over CPU versions of the applications in that space.

You may be able to get a modern GT 1030 with 2 GB of memory for $90 to $100, which should be faster than the older GT 720 suggested by the previous poster.

https://www.geforce.com/hardware/desktop-gpus/geforce-gt-1030/specifications

Of course that’s right for his longer-term plans, but the OP said he wanted something “to have an idea of how to code in CUDA” and to “get a GPU that just supports CUDA”.

If you are right about the price, the 1030 would of course be preferred. Thanks for mentioning it. At least my post had the effect of provoking yours. :-)

Questions regarding hardware recommendations in a specific price range are notoriously difficult to answer because it is usually not clear where a poster lives, or how much their typical shipping charges and sales tax / VAT might be. I see the GT 1030 listed for as low as $89.99 on Amazon in the US (i.e. the price prior to taxes and shipping). That’s all the information I have.

With GPUs, when choosing between similarly priced cards, it is usually best to pick the latest model, as features like performance and debugger / profiler support tend to improve with increasing GPU generation.

I will assume you (the reader) live in North America or in developed/emerging-market nations in the EU/Asia-Pacific that implement the concept of a “minimum wage” for workers. The suggestions below will save you money, assuming that you could otherwise spend the wasted time at a job earning minimum wage.

I will defer to njuffa on any GPU/environment specifics he disagrees with, but I can offer advice (1) as a long-time ML/vision person with a lot of experience onboarding new people, and (2) as a relative noob with respect to CUDA who got serious about CUDA/GPU over the past 12 months, with plenty of scar tissue as a result. (Note that most of my previous PC-based ML experience used multi-core concurrency and MKL/ATLAS, but prior to that I programmed many supercomputers that are GPU/CUDA’s intellectual ancestors.)

As a preface, remember that the first step of effective problem solving is to ask the right questions, and asking which GPU to buy may be less relevant than some other considerations that affect the usefulness of a GPU to your learning experience.

  1. ML (more properly, adaptive statistical algorithms), regardless of whether the application is vision, games, investment, etc., is ultimately an independent subject from High Performance Computing (HPC). Being aware of the separation will enhance your ability to use them together. And, just to burst this bubble up front, expecting to learn either of these subjects via frameworks such as TensorFlow is analogous to learning about design and drawing by using crayons to fill in a coloring book. The important ML programming concepts can be learned without a GPU on relatively small problems, so you can use any computer for that. Likewise, most GPU programming techniques are best studied in a simple context by themselves rather than introduced as part of a more complex learning algorithm. Frameworks can be useful, but they are not a substitute for understanding.
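To illustrate what “a simple context” can look like: the sketch below is the classic element-wise vector add, roughly the “hello world” of CUDA. It assumes a CUDA-capable GPU and the nvcc compiler; unified memory is used purely to keep the example short, and the explicit cudaMalloc/cudaMemcpy pattern you will see in the official samples is the more common form.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one element; the grid covers the whole array.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps the example short (no explicit host/device copies).
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int block = 256;                       // threads per block
    int grid  = (n + block - 1) / block;   // enough blocks to cover n elements
    vecAdd<<<grid, block>>>(a, b, c, n);
    cudaDeviceSynchronize();               // wait before reading results on the host

    printf("c[0] = %f\n", c[0]);           // each element should be 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Even a GT 1030 runs this fine, which is the point of item 1: the concepts (thread indexing, grid sizing, synchronization) are the same on any CUDA card.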

  2. You need to keep gaming and HPC separate, so a Radeon is perfect for enforcing that. However, rule (1) above is your friend here: learning about ML itself does not require a GPU, and as a matter of fact often does not require a computer (for example, you will want to review or study statistics and vector calculus from a book with exercises).

  3. Overclock nothing. Do not purchase hardware that is factory overclocked. There is enough that can go wrong already.

  4. You will need to run Red Hat Enterprise Linux 7.4 (RHEL 7) on your GPU machine. Not Windows. Not Ubuntu. Not a Mac. You can get a full-blown RHEL 7 license for free by signing up for Red Hat’s developer program. If you use Ubuntu, Mint, SLES, etc., you will wind up spending the majority of your time debugging and reading conflicting advice on Stack Exchange and AskUbuntu instead of learning about ML/vision. To reiterate rule (1), this only applies to the GPU hardware; you can run learning algorithms on anything.

  5. Get the latest GPU hardware that is not overclocked, preferably made by NVIDIA, and if not by NVIDIA, get something that is the “reference design.” I’d take modern over powerful, as you are a little less likely to run into missing capabilities. So, at the time of this writing: if you are on a budget, get a 1050 Ti or, ideally, a 1070 Ti. I would avoid used equipment and eBay. Get a GPU that has the reference number of PCIe power plugs, and plug them into separate outlets on the power supply.
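One concrete way to see how “modern” a card is, once installed, is to query its compute capability, which is what CUDA toolkit and library support are keyed on (Pascal cards such as the 1050 Ti / 1070 Ti / GT 1030 report 6.1). A minimal sketch, compiled with nvcc:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, d);
        // major.minor is the compute capability, e.g. 6.1 for Pascal.
        printf("Device %d: %s, compute capability %d.%d, %.1f GiB\n",
               d, p.name, p.major, p.minor,
               p.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```

The deviceQuery sample shipped with the CUDA toolkit prints a much more complete version of this report.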

  6. Your motherboard and power supply are of utmost importance. The PSU should be rated at 2.5x - 3x the rated power of your GPU(s); e.g., for a 180 W GTX 1070 Ti, that means roughly a 450-550 W unit. You want the boring-est motherboard you can find, preferably a “workstation” motherboard (or “server,” provided it only has one CPU), with a chipset that is one or two generations behind Intel’s current latest offering, e.g. X99 or Z270. You should also be aware that you effectively cannot use your GPU for computing and OS display graphics at the same time. The ideal situation is to get a motherboard with onboard graphics and plug your monitor into that; don’t have a video cable attached to the GPU.

  7. If you are reading this on the NVIDIA forums, you have a developer membership, and there are lots of good materials and sample code for CUDA programming. Working through those in C/C++ is a good place to start.
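One habit worth adopting from the very first sample onward: check the return code of every CUDA runtime call, because errors are otherwise silent and are a major time sink for beginners. The macro below is a common pattern (an illustrative sketch, not taken verbatim from the samples); compile with nvcc:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Wrap every CUDA runtime call; abort with file/line and a readable
// error string if anything fails.
#define CUDA_CHECK(call)                                           \
    do {                                                           \
        cudaError_t err = (call);                                  \
        if (err != cudaSuccess) {                                  \
            fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__,     \
                    cudaGetErrorString(err));                      \
            exit(EXIT_FAILURE);                                    \
        }                                                          \
    } while (0)

int main(void)
{
    float *p = nullptr;
    CUDA_CHECK(cudaMalloc(&p, 1024 * sizeof(float)));
    CUDA_CHECK(cudaMemset(p, 0, 1024 * sizeof(float)));
    CUDA_CHECK(cudaFree(p));
    printf("CUDA runtime calls completed without error\n");
    return 0;
}
```

Kernel launches themselves return no value, so after a launch the usual idiom is `CUDA_CHECK(cudaGetLastError())` followed by a checked `cudaDeviceSynchronize()`.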

  8. Your best source for learning the basics of ML is the free course offered by Stanford University’s Lagunita program called “Introduction to Statistical Learning with R,” taught by Profs. Tibshirani and Hastie. There is no charge, and past courses are archived so you can begin at any time. It uses R and RStudio, which are free, as is the PDF version of the textbook. It is a class intended to be suitable for any undergrad and assumes only that you have taken some rudimentary probability / statistics course. It is helpful, but not absolutely necessary, to have taken first-semester calculus (i.e., you know that the derivative is zero at the min or max of a function).

  9. After doing those things you should have some idea of where frameworks can help, and why you might choose one over another.

  10. Note that most people who seriously do ML, and have a choice in the matter, use some combination of C, R, MATLAB, and/or some specialized language. That is, languages like Python and Java are widely supported, but more because of demand from their installed base than because of their suitability (as a matter of fact, standard Python’s global interpreter lock blocks full thread-level concurrency). But enough talk of religion.