CLCudaAPI: A portable high-level C++ API with CUDA or OpenCL back-end

This is an announcement of the release of CLCudaAPI, a new way to write CUDA and OpenCL host programs.

CLCudaAPI provides a C++ interface to the OpenCL and CUDA APIs. This interface is high-level: details such as setting up an OpenCL platform and device, as well as OpenCL and CUDA memory management, are handled automatically. A similar high-level API is also provided by Khronos’s cl.hpp, so why would someone use CLCudaAPI instead? The main reason is portability: CLCudaAPI provides two header files which both implement the exact same API, but with different back-ends. This makes porting between OpenCL and CUDA as simple as changing the header file!

CLCudaAPI is written in C++11 and wraps CUDA and OpenCL objects in smart pointers, handling memory management automatically. It uses the CUDA driver API, since that is the closest match to the OpenCL API, but it uses OpenCL terminology, since that is the most generic. It compiles OpenCL and/or CUDA kernels at run-time, which has only been possible in CUDA since release 7.0. CLCudaAPI handles the host API only: you still need two versions of each kernel (although a few simple defines could remove this requirement).

Let’s take a look at an example. First, to get started, include either of the two headers:

#include <clpp11.h>
// or:
#include <cupp11.h>

Here is a simple example of setting up platform 0 and selecting device 2:

auto platform = CLCudaAPI::Platform(0);
auto device = CLCudaAPI::Device(platform, 2);

Next, we’ll create a CUDA/OpenCL context and a queue (the equivalent of a CUDA stream) on this device:

auto context = CLCudaAPI::Context(device);
auto queue = CLCudaAPI::Queue(context, device);

And, once the context and queue are created, we can allocate and upload data to the device:

auto host_mem = std::vector<float>(size);
auto device_mem = CLCudaAPI::Buffer<float>(context, CLCudaAPI::BufferAccess::kReadWrite, size);
device_mem.WriteBuffer(queue, size, host_mem);

To get started with CLCudaAPI, check out the examples in the samples/ folder, which show how to compile and launch a simple kernel. The full CLCudaAPI API reference is also available in the GitHub repository.

CLCudaAPI is available on GitHub:

Any feedback is welcome!