Building a library for automatic parallel computing with CUDA

Hey guys,

For my final year project, I have chosen to build a library that developers could use to do GPGPU computing with CUDA without having to understand the mechanisms behind the different kernel implementations of the CUDA API (a CUDA wrapper, in other words). This library would likely resemble OpenMP. For those who are unfamiliar with OpenMP, it is an API that supports multi-platform shared-memory multiprocessing in C, where data layout and decomposition are handled automatically by directives. For example, this snippet parallelizes a loop with a couple of directives:

 long sum = 0, loc_sum, w;
 int i;

 /* forks off the threads and starts the work-sharing construct */
 #pragma omp parallel private(w, loc_sum)
 {
     loc_sum = 0;
     #pragma omp for schedule(static, 1)
     for (i = 0; i < N; i++) {
         w = i * i;
         loc_sum = loc_sum + w * a[i];
     }

     /* each thread adds its partial sum exactly once */
     #pragma omp critical
     sum = sum + loc_sum;
 }

 printf("\n %li", sum);

In my case, I would like to implement the same functionality for CUDA parallel computing on the GPU. Hence, I will need to build a set of compiler directives, library routines, and environment variables that influence run-time behavior. Every CUDA API call must be hidden from the programmer.
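To make that concrete, here is a hypothetical sketch (plain C, no actual CUDA) of the transformation such a directive layer would have to perform: the user's loop body becomes a kernel where each thread handles one iteration, and the library emits a launcher that hides the grid/block arithmetic. All names below are made up for illustration; `tid` plays the role of `blockIdx.x * blockDim.x + threadIdx.x`.

```c
#include <stddef.h>

/* What the library might generate from a user loop such as:
 *     #pragma gpu parallel for        <- hypothetical directive
 *     for (i = 0; i < n; i++) out[i] = a[i] * a[i];
 * The loop body becomes a per-thread kernel body: */
static void square_kernel_body(size_t tid, size_t n,
                               const double *a, double *out)
{
    if (tid < n)                 /* usual bounds guard in generated kernels */
        out[tid] = a[tid] * a[tid];
}

/* Host-side launcher the library would also emit; here a plain loop
 * stands in for a <<<grid, block>>> kernel launch plus the hidden
 * cudaMalloc/cudaMemcpy traffic. */
static void square_launch(size_t n, const double *a, double *out)
{
    for (size_t tid = 0; tid < n; tid++)
        square_kernel_body(tid, n, a, out);
}
```

The point of the sketch is that the user never sees the thread-index computation, the bounds guard, or the launch: the directive layer owns all of that.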

Since CUDA uses a SIMD-like (strictly, SIMT) architecture, I know there are many factors that have to be accounted for, especially dependencies between iterations. But for now I will assume that the programmer knows the limitations of GPGPU computing.
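To illustrate the dependency issue: the first loop below has independent iterations and maps directly onto one-thread-per-iteration, while the second carries a value from iteration to iteration, so a naive per-iteration mapping would be wrong and the library would need to substitute a different parallel algorithm (e.g. a parallel scan). A plain-C sketch:

```c
#include <stddef.h>

/* Independent iterations: safe to give each GPU thread one i. */
static void scale(size_t n, const double *a, double *out)
{
    for (size_t i = 0; i < n; i++)
        out[i] = 2.0 * a[i];         /* out[i] depends only on a[i] */
}

/* Loop-carried dependency: iteration i reads the result of i-1.
 * Running iterations concurrently as-is produces garbage; on a GPU
 * this is typically rewritten as a parallel prefix-sum (scan). */
static void prefix_sum(size_t n, const double *a, double *out)
{
    double acc = 0.0;
    for (size_t i = 0; i < n; i++) {
        acc += a[i];
        out[i] = acc;
    }
}
```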

Now, here is where I need your help. Could anyone give me any advice on where to start building such a library? Also, does anyone have any good tutorials that could help me deal with compiler directives or environment variables? Or does anyone know of another library that does a similar task and has good documentation?

And most importantly, do you think this is a project that can be done in about 1200 hours? I am already a bit familiar with GPGPU and CUDA, but building such a library is new to me.


I think it is too difficult to do. Check out the Thrust project, btw. In any case, the area of application is quite small: you would need loops of at least roughly 10,000 iterations for the GPU to pay off, and there are a lot of restrictions on inner loops. Personally, I suggest doing something different.

In addition to the library, you would need to modify the compiler, which is not a simple task.

Check out hiCUDA for a project that has done this already.

I think OpenMP is a very difficult model to start from for such a project. OpenMP in many ways is a superset of CUDA that is best aimed at SMP hardware (multicore, multi-CPU, etc). SIMD on a CPU is a different beast with more restrictions, and CUDA is even more restricted (in some ways) than that. A lot of very natural OpenMP code would run terribly on CUDA through overuse of critical sections and global barriers. CUDA hardware doesn’t give you much help with synchronization primitives, in part because the cost of global barriers and critical sections is huge when you have 10,000+ threads. You’ll want a syntax that naturally guides programmers to do the right thing, without random pits to fall into.
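For example, the critical-section accumulation in the OpenMP snippet above is exactly the pattern that hurts on a GPU; CUDA code usually replaces it with a tree-shaped reduction, halving the number of active threads each step so no lock is ever taken. A sequential plain-C simulation of that access pattern (assuming, for brevity, that n is a power of two):

```c
#include <stddef.h>

/* Tree reduction: log2(n) steps, each step has the "threads"
 * 0..stride-1 add in the value held by their partner at i+stride.
 * No critical section, no global lock. Modifies v in place. */
static double tree_reduce(double *v, size_t n)
{
    for (size_t stride = n / 2; stride > 0; stride /= 2)
        for (size_t i = 0; i < stride; i++)
            v[i] += v[i + stride];
    return v[0];
}
```

On real hardware each inner loop would be one synchronized step of a thread block, which is why a syntax that nudges users toward reductions rather than critical sections matters so much.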

That said, there are projects that push all of the parallel analysis into the compiler and produce SIMD code with no help from programmer annotations. The GCC autovectorizer project describes a number of techniques and references various papers which might give you ideas.

With 1200 hours, though, it might be better to narrow the problem domain. A very productive way to leverage CUDA is to decompose a particular problem space into fundamental high-level operations, and wrap those in such a way that you can call them easily from host code. CUBLAS is an example which uses an existing high-level API for linear algebra. PyCUDA does sort of the same thing with the numpy interface in Python. It is much easier to map a more restrictive computation model (e.g. BLAS) to a less restrictive one (CUDA), than to go the other way around (say OpenMP to CUDA).
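As a sketch of what "wrap a fixed high-level operation" means in practice: a BLAS-style saxpy has fixed, well-known semantics and requires no analysis of user code, so a library just exposes a signature like the one below. The body shown is the CPU reference; a CUDA backend would swap in a kernel launch behind the same signature (this is essentially what CUBLAS's `cublasSaxpy` does).

```c
#include <stddef.h>

/* y[i] = alpha * x[i] + y[i] for i in [0, n).
 * Fixed semantics make this trivial to reimplement on any backend. */
static void saxpy(size_t n, float alpha, const float *x, float *y)
{
    for (size_t i = 0; i < n; i++)
        y[i] = alpha * x[i] + y[i];
}
```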

Thanks everyone, you’ve all been very helpful.

For now, I think I’ll write a script in Python to parse parallel chunks of code and see where it goes.

One thing I forgot to mention: the main idea is to give a high-level abstraction of CUDA for computational finance developers. Hence, it would be useful to build a framework for easily generating Monte Carlo algorithms on the GPU. Does anyone have ideas about what kind of high-level operations I could implement?

You might want to look here

And as for a list of high-level operations, you can look at MATLAB’s toolboxes to get an idea of what people are using.