Kernel fails to call a function in a static library (.lib)

Hi. I have been struggling with how to call functions located in a static library from a kernel. My solution contains two projects: one builds the library, and the other holds the GPU main function. The GPU kernel calls some functions from the library project. My configuration is as follows:

  1. System: Windows XP SP3, Visual Studio 2008, Tesla C1060, emulation mode.
  2. Static library: I use the default configuration, and it compiles successfully; I can find the .lib and .obj files.
  3. GPU project: I add the library directory to Additional Library Dependencies, and I also add the library's include directory.

Here are my problems:

  1. If the GPU project contains no .cu files, i.e. it is a pure C project, I can compile and use the .lib with no problem: a .c file in the GPU project can call functions in my library just fine. But it stops working once I add a .cu file. For example, if my kernel calls a function from the library, compilation fails with "error LNK2001: unresolved external symbol" - the function cannot be found. (A minimal sketch of the failing pattern follows this list.)
  2. What I want is to keep the library unchanged and call its functions as needed; the library should stay portable to as many platforms as possible, so on the GPU I would have the kernel call those functions. But only emulation mode supports that, right? If I switch to Debug or Release mode, I am no longer allowed to call the library functions from the kernel. Any suggestions?
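
To make problem 1 concrete, here is a minimal sketch of the failing pattern (all names are hypothetical):

    /* mylib.h - header of the host-built static library */
    float lib_scale(float x);

    /* kernel.cu - calling the host routine from the kernel. This
       links in emulation mode, but a real GPU build fails with
       LNK2001 because no device-code version of lib_scale exists. */
    #include "mylib.h"

    __global__ void scale_kernel(float *data)
    {
        int i = threadIdx.x;
        data[i] = lib_scale(data[i]);
    }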

But first of all, is it even feasible to create my own library and use it inside a GPU kernel? My thinking is that the library is compiled for the host machine, so it lives on the host, not on the GPU device, and the kernel therefore cannot reach it while running. If that is the case, is there any way to put the library on the GPU? Thanks.

You are correct - the functions in the existing library can only be called from a kernel when it is running in emulation mode. When actually running on the GPU, that library is not available; you have to implement the relevant functions yourself.

Thanks for the reply.

This means the code has to be GPU-specific and is not portable to other platforms. Not very good~~~

Well, routines in *.cu files can be labelled with __device__ and __host__ so you get two copies. I imagine that with a bit of #ifdef and Makefile magic, this could produce a 'universal' library.
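
For example, a minimal sketch of what I mean (UNIVERSAL and my_scale are made-up names; __CUDACC__ is defined whenever nvcc is doing the compiling):

    /* mylib.h - one source, two copies. nvcc emits both a host and
       a device version of the routine; a plain C compiler sees
       ordinary C because the qualifiers compile away. */
    #ifdef __CUDACC__
    #define UNIVERSAL __host__ __device__
    #else
    #define UNIVERSAL
    #endif

    UNIVERSAL float my_scale(float x)
    {
        return 2.0f * x;
    }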

I see what you mean, and you are correct: routines qualified with __device__ and __host__ can be called from both sides. But those routines are in the library, which I am not supposed to modify. Our original idea was to write the library source in pure C and make it portable to different platforms, so that an outside program could call any subroutine from that 'universal' library - which is a basic philosophy of software. I guess that does not work for the GPU architecture.

CUBLAS and CUFFT are libraries that work perfectly well with CUDA, so it can be done, but you have to modify the code. Only code compiled by nvcc can run on the GPU, so at the very least those routines have to be compiled twice, once for the CPU and once for the GPU. They can then all be linked together into a single library, which can check for the presence of a GPU when it loads.
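
As a rough sketch of that load-time check (gpu_available is a made-up name; cudaGetDeviceCount is the real runtime call):

    #include <cuda_runtime.h>

    /* Returns nonzero when at least one CUDA device is present, so
       the library can fall back to its CPU code path otherwise. */
    int gpu_available(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess)
            return 0;
        return count > 0;
    }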

Yeah, some of my routines provide the same functionality as those in CUBLAS, so I will probably take advantage of that library instead of building my own. Anyway, thanks a lot for the help.