CUDA Libraries

I read that CUDA 5 NVCC supports creation of static libraries. However, I thought that earlier versions of CUDA already supported library creation. Can someone please clarify this?


CUDA 5.0 adds support for device-side linking (static linking only for now), which allows the creation of device-side libraries. So far, “libraries” for the device had to be implemented as collections of header files. An example would be CUDA’s standard math library.
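As an illustration of what device-side linking enables (file and function names here are hypothetical), a device function in one .cu file can now call a device function defined in a different .cu file:

```cuda
// a.cu -- calls a device function defined in a *different* file,
// which is only possible with CUDA 5.0's device-side linking.
extern __device__ float my_norm(float x, float y);   // defined in b.cu

__global__ void norm_kernel(float *out, float x, float y)
{
    *out = my_norm(x, y);
}

// b.cu (a separate translation unit):
//   __device__ float my_norm(float x, float y) { return sqrtf(x*x + y*y); }
//
// Build sketch with the CUDA 5.0 toolchain (separate compilation
// requires sm_20 or later):
//   nvcc -arch=sm_20 -dc a.cu b.cu              // relocatable device code
//   nvcc -arch=sm_20 -lib a.o b.o -o libab.a    // static device-side library
//   nvcc -arch=sm_20 main.cu libab.a -o app     // nvcc does the device link
```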

It has always been possible to build libraries of host-callable kernels with CUDA by using the standard host-side linker. Examples are CUBLAS and CUFFT.
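For contrast, a host-callable kernel library needs only the ordinary host linker, because all of its device code is self-contained in a single translation unit. A minimal sketch (names made up) of the style CUBLAS and CUFFT use:

```cuda
// scale.cu -- a minimal host-callable "library" routine. All device
// code lives in this one translation unit, so no device-side linking
// is needed; the ordinary host linker can package the resulting
// object file into a static library.

__global__ void scale_kernel(float *x, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= a;
}

// Host-callable entry point, e.g. exported from a libscale.a.
extern "C" void scale(float *d_x, float a, int n)
{
    scale_kernel<<<(n + 255) / 256, 256>>>(d_x, a, n);
}
```

Compiled with `nvcc -c scale.cu` and archived into a static library with `ar`, `scale()` is callable from any host program; this has worked since the earliest CUDA releases.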

So are the device-side libraries stored in the device's global memory? What are the benefits? I'd guess these two:

  1. The time it takes to load the library to the device is saved.
  2. The library can be compiled in a device specific way.

I also read this after my last post:

So prior to CUDA 5, one couldn't have separate .o files? Does having multiple .o files imply that each .o can contain one or more callable kernels?
I'm assuming the ab.culib in the PDF referenced above is device-side? Regarding user-defined callbacks, would these be used to notify of and accept device computation results?

Prior to CUDA 5.0, a device function in one .o file could not call a device function in a different .o file, as there was no linking of device code. So instead of shipping a cuda_mathlib.a with code for sine, cosine, exponential, etc. as subroutines callable from a user's device code, the CUDA math library had to be provided as a set of header files. Any CUDA user wanting to provide a device-side library would run into the same issue.
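Concretely, the header-only workaround looks like this (a hypothetical mymath.cuh standing in for the real math headers):

```cuda
// mymath.cuh -- pre-CUDA-5.0 style "device library": everything is
// defined inline in the header, so the code is compiled directly into
// each caller's translation unit. This was the only option when
// device calls across .o files were impossible.

__device__ __forceinline__ float my_square(float x)
{
    return x * x;
}

// A user simply writes:  #include "mymath.cuh"
//
// With CUDA 5.0's device linker, my_square could instead live in its
// own .cu file, be compiled with -dc into a static library, and be
// declared extern __device__ by its users.
```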

I have not had a chance yet to explore the new linker and device-side libraries. These capabilities should be available in the posted CUDA 5.0 preview available to registered developers. I would encourage anybody interested in this functionality to explore it. Please file bugs on anything that doesn’t work as it should.