CUDA Education How best to progress with learning CUDA


I’ve been trying to learn CUDA and have watched the webcasts and the excellent Dr Wen-mei W. Hwu stuff.

However I don’t really have any applications in mind at present and am not sure how best to progress.

I’m an experienced programmer with a bachelor’s-level education in computer science, and I’m trying to improve my math.

Does anyone have any advice on how to learn CUDA / parallel programming / HPC?

Are there any good books with exercises for CUDA?

Are there any good short courses?

I have considered an MSc but I’m no spring chicken :)



Take one of the CUDA SDK samples and modify it thoroughly, e.g. swap one image-processing algorithm out for another. The SDK code gives you instant visual feedback on your results.

Or pick up one of the “contest ideas” from the dedicated CUDA contest forum and try to make it happen.

Or take someone’s failed CUDA attempt (that was posted with source code in the forums) and make it run… and make it run fast.

Or take an existing open sourced application…and try to accelerate it with CUDA.
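For the image-processing route, a minimal kernel swap might look something like this. It's only a sketch: the kernel name, the invert filter, and the launch parameters are illustrative, not taken from any particular SDK sample.

```cuda
// Sketch of a replacement image-processing kernel: one thread per pixel,
// inverting a grayscale image. Drop something like this into an SDK
// sample in place of its existing filter kernel.
__global__ void invertKernel(const unsigned char *src, unsigned char *dst,
                             int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {            // guard against partial blocks
        int idx = y * width + x;
        dst[idx] = 255 - src[idx];            // invert one pixel
    }
}

// Host-side launch (assumes d_src/d_dst were allocated with cudaMalloc
// and the image was copied over with cudaMemcpy):
//   dim3 block(16, 16);
//   dim3 grid((width + block.x - 1) / block.x,
//             (height + block.y - 1) / block.y);
//   invertKernel<<<grid, block>>>(d_src, d_dst, width, height);
```

Because the SDK samples already handle loading the image and displaying the result, changing just the kernel body gives you immediate visual confirmation of whether your code works.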


I’ve started with the programming guide and the simpler examples from the SDK. It took me quite a while to “get” it all, but it’s possible that way. The U. Illinois course helped too.
If you don’t know anything about parallel programming, “Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers” got me through my BA class, but I don’t think it’s the best book out there. I didn’t find it particularly fantastic, but it did the job.

I believe there will be some pre-conference tutorials at the upcoming NVIDIA GPU computing conference at the end of September, but it’s in San Jose, so quite a way from you.

The best way, after some basic understanding, is to jump in. My first attempt was to port a simple “box with bouncing spheres” demo I wrote at the beginning of my BA. It was a complete failure, but it did point me towards what’s tough and what works.

I’m not sure exactly what you have in mind for CUDA, but if you want to do numerical analysis (linear algebra, etc.) a parallel algorithms book is an absolute must (unless you want to get a good numerical analysis book and derive your own algorithms, which would be great for learning the maths).

Also, visit the forums here at least twice a day and look through the new topics in the “General GPU” and “Programming and Development” forums. There are a good number of major and minor problems that new users often run into, and if you read the forums frequently you’ll learn how to deal with them properly.

As far as actually doing some programming with CUDA, you should find a topic you’re interested in, and try to write a small application or library that solves a certain problem (for example, a matrix multiplication routine, or some kind of simulation). When you get it working, you could post your code up on the forum for others to learn from, as well as to get feedback from the more experienced users (they may find some places where your code can be optimized).
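A matrix multiplication routine like the one mentioned above is a good first target because the naive version is short. Here is one possible first-cut sketch (one thread per output element, no shared-memory tiling yet; the programming guide's matrix-multiply chapter shows the tiled version that follows naturally from this):

```cuda
// Naive matrix multiply for square N x N matrices in row-major layout:
// each thread computes a single element of C = A * B. Correct but
// memory-bound; a shared-memory tiled version is the standard next step.
__global__ void matMul(const float *A, const float *B, float *C, int N)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float sum = 0.0f;
        for (int k = 0; k < N; ++k)
            sum += A[row * N + k] * B[k * N + col];   // dot product of row and column
        C[row * N + col] = sum;
    }
}
```

Getting this working end to end (allocation, copies, launch, verification against a CPU loop) exercises most of the basic CUDA workflow, and the optimized versions make good follow-up forum posts.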

Thanks everybody for your suggestions; I’ll have a look at the CUDA contest. I may even go to the conference if there are a few intermediate tutorial sessions.

Profquail, I’ve never really done much proper parallel stuff. Do you have any favourite books in mind?

Well, there are the notes from the Parallel Computing course at U. Illinois (which may be what Ailleur mentioned):

I also have a book that I found in a pile of ‘giveaway books’ a while back (books that people in our CS department wanted to get rid of) called ‘Parallel Algorithms Made Easy’ (ISBN: 0471251828). It’s not the best book on the subject (it even contains a few small errors), but it is easy to read and has a good overview of a bunch of different topics (‘Foundations of parallel computing’, graph algorithms, searching/sorting, linear algebra, etc.) so I’d say to get that one if you can find a copy for a reasonable price.

Like I wrote earlier though, if you plan on doing a lot of numerical calculations, a book on numerical analysis is invaluable (though I’m probably a bit biased, because that was my undergraduate field of study ;). The book for our “Intro” class on the subject was pretty easy to read as well: “Numerical Mathematics and Computing” (5th ed., ISBN: 0534389937). It’s important to learn because it will help you understand why numerical algorithms must be designed carefully (to minimize roundoff errors), and the algorithms for a given problem usually have different versions for parallel and serial computation. But again, this subject is really only important if you’re doing linear algebra or other numerical computations; if you’re doing something like the ‘bitonic sort’ (for example), you won’t need it.
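To make the roundoff point concrete, here is a tiny host-side sketch (plain C++, so it also compiles with nvcc) showing how single-precision accumulation order loses information. This is one reason parallel reductions sum in a tree rather than serially: partial sums stay small, so fewer low-order bits are discarded.

```cuda
// Host-side illustration of single-precision roundoff: once the running
// sum is large, each added 1.0f falls below half an ulp of the
// accumulator and is rounded away entirely.
#include <cstdio>

int main()
{
    const int n = 1 << 24;        // 16,777,216 terms
    float naive = 1.0e8f;         // large starting value; ulp here is 8.0f
    for (int i = 0; i < n; ++i)
        naive += 1.0f;            // each 1.0f rounds away (with IEEE-754 floats)

    printf("naive serial sum: %f\n", naive);              // stays at 100000000.000000
    printf("exact result:     %f\n", 1.0e8 + (double)n);  // 116777216.000000
    return 0;
}
```

Summing the small terms first (or pairwise, as a reduction kernel does) recovers the correct answer, which is exactly the kind of restructuring a numerical analysis text teaches you to recognize.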