Calling / Expanding Macros in device code.

I have a program that I want to rewrite in CUDA, but it depends on another library. Most of the code I want to reimplement in my own version of the program relies on this library: the program calls various macros, which in turn call other macros in the dependency library. Is it possible to change only the necessary functions in the program (with the device functions still calling the macros declared for the host), or do I have to manually expand the macros at every level into the device function I want to make or alter into a kernel?

Since macros are ‘expanded’ rather than ‘called’, this is a bit confusing to me.


You are right that macros are expanded rather than called, and macro expansion itself does not conflict with CUDA: the preprocessor runs before the compiler splits host and device code, so a macro defined in a host header can expand inside a `__device__` or `__global__` function. However, you should be prepared for the expanded macros to contain things that do not work in device code (for example, calls to host-only functions). The library may also contain unrelated code that does not compile under nvcc at all.
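As a minimal sketch of why this works: the macro below (`CLAMP` is a hypothetical stand-in for whatever your library defines) expands to plain arithmetic, so the preprocessor substitutes it into the kernel body before device compilation ever sees it.

```cuda
#include <cstdio>

// Hypothetical "library" macro. Because it expands to ordinary
// expressions, the same definition works in host and device code.
#define CLAMP(x, lo, hi) ((x) < (lo) ? (lo) : ((x) > (hi) ? (hi) : (x)))

// The macro is expanded by the preprocessor before nvcc splits the
// translation unit into host and device compilation trajectories.
__global__ void clampKernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        data[i] = CLAMP(data[i], 0.0f, 1.0f);
    }
}

int main() {
    const int n = 4;
    float h[n] = {-0.5f, 0.25f, 0.75f, 2.0f};
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
    clampKernel<<<1, n>>>(d, n);
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);
    for (int i = 0; i < n; ++i) printf("%.2f\n", h[i]);
    return 0;
}
```

A macro that expanded to, say, `malloc` or an I/O call would expand just as readily, but the resulting device code would fail to compile or behave differently, which is exactly the risk described above.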

My recommendation would be to first extract the relevant macros into their own header file (license permitting), verify that the CPU version of your program still works with just that header and without the rest of the library, and make sure you understand what the macros expand to. Then you are well prepared to port the code to CUDA.
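The extraction step might look like the following sketch; the macro names are hypothetical placeholders for whatever chain your library actually defines.

```cuda
// my_macros.h -- hypothetical header holding only the macros the ported
// code needs, copied verbatim from the dependency library (license
// permitting). It includes no other library headers, so it can be
// checked for CUDA compatibility in isolation.
#ifndef MY_MACROS_H
#define MY_MACROS_H

// Placeholder macro chain: one macro calling another, as in the
// original library. Substitute the real definitions here.
#define SQUARE(x)       ((x) * (x))
#define DISTANCE2(a, b) (SQUARE((a).x - (b).x) + SQUARE((a).y - (b).y))

#endif // MY_MACROS_H
```

Because the file contains only preprocessor definitions, the same header can be included from both .cpp and .cu translation units, and running the preprocessor alone (e.g. `nvcc -E` or `g++ -E` on a file that includes it) lets you inspect the fully expanded code before porting it.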