Hello all! I’m a student at the University of Washington doing an independent study where I’m learning to do some computing on the GPU. My background is math; I have some limited programming experience, but nothing that great. The professor overseeing this wants me to use CUDA, but it seems like it’s pretty much on me to do everything, so I’m super worried. I don’t have any experience in C, I have never used CUDA, and I have only a little experience with the command line. I am REALLY hoping some folks on here can offer suggestions, materials, or sites that would help with the learning curve. So far I have just dinked around on the site but not much else. I’ll be working on a MacBook with an NVIDIA GeForce 320M graphics card. Reading this site, I thought I saw that release 3.0 would work for that card, but would 4.0 work? Is there any real difference? I would GREATLY appreciate all the help and advice I can get. Essentially, I’m trying to learn how to do computations on the GPU, and then IF I can do that in a somewhat good way, apply it to a specific problem that would be better solved on the GPU than the CPU. Hope that makes sense. Again, thanks for helping a noob-wannabe-programmer out!
You should go through the Programming Guide and the Best Practices Guide for a first taste. The book CUDA by Example also has examples that help you understand how some of the basic stuff works.
There is no way to tell in advance whether something will perform better on the CPU or the GPU; it depends on the application, the coding, and many other parameters.
Use CUDA 4.0 - it works well on MacBooks. And as Amanda said, the CUDA C Programming Guide is the first and definitive material to read.
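To give you an idea of what CUDA C looks like before you dive into the guides, here is a minimal vector-add sketch (my own illustration, not from the guides; error checking is omitted for brevity, and you compile with `nvcc`):

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1024;
    size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) arrays.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = i; h_b[i] = 2.0f * i; }

    // Allocate device (GPU) arrays.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);

    // Copy inputs to the GPU, launch the kernel, copy the result back.
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);
    vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[10] = %f\n", h_c[10]);  // 10 + 20 = 30

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

The launch configuration `<<<(n + 255) / 256, 256>>>` just rounds up so there are at least `n` threads of 256 per block; the `if (i < n)` guard keeps the extra threads from writing out of bounds. The Programming Guide explains all of these pieces in detail.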
Be prepared, though, for the possibility that a GPU implementation on the MacBook is slower than a CPU implementation (MacBooks have powerful CPUs but entry-level GPUs). They are still great for learning and programming, and you can later move to a desktop with a high-end card and enjoy a nice speedup.