CUDA cuInit unresolved external

Hey, I’m dabbling with the CUDA driver API, and I’m having some issues. The following code produces an unresolved external:

#include "cuda.h"

int main() {
    CUdevice device;
    CUcontext context;
    CUmodule module;
    CUfunction kernel;
    CUresult result;

    result = cuInit(0);   // <-- unresolved external symbol cuInit
    return 0;
}

The code is in the main file of the default CUDA 12.0 toolkit project. How can I solve this?

You have to link against the CUDA (driver API) library.


nvcc doesn’t add this automatically for you.
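As a command-line sketch of what that linking looks like (the file names `main.cpp` and `app` are placeholders, and the MSVC paths are illustrative; `%CUDA_PATH%` is set by the Windows CUDA installer):

```shell
# Link the driver API library explicitly; nvcc does not add it for you.
nvcc main.cpp -o app -lcuda

# Or, with the MSVC host compiler directly (no nvcc required for host-only code):
cl main.cpp /I"%CUDA_PATH%\include" /link /LIBPATH:"%CUDA_PATH%\lib\x64" cuda.lib
```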


Where should I place this argument? I have tried putting it here:
Configuration Properties → CUDA C/C++ → Command Line → Additional Options
But that doesn’t work. Note that I double-checked the configuration and the active platform, and both are correct. Thanks

I guess you’re using Visual Studio, but you don’t know how to add a library to the link specification for a Visual Studio project?

That methodology isn’t unique or specific to CUDA.

You might study a Visual Studio CUDA sample project that is targeting the CUDA driver API, such as vectorAddDrv.

Otherwise, this may be of interest. The relevant library here is cuda.lib.

Would you mind telling me what I need to include/link against if I want to make a project that only uses the driver API (without having to link with the entirety of the CUDA toolkit)?

cuda.lib is the library (on Windows).
For includes, it is #include <cuda.h>

I’m sorry, there appears to be some misunderstanding. I was just extending my question from before; what I was asking was more along the lines of “If I were to create a C++ console project, what steps would I need to take in order to get an interface with the CUDA driver API working?”. Note that I was able to include cuda.h and link successfully, but I’m having trouble compiling my .cu files with this setup. Regards

Using the driver API (only), you cannot provide CUDA C++ device source code as input. You must first use nvcc (or NVRTC) to compile the CUDA C++ device source code to a PTX (or, alternatively, CUBIN) file. Therefore, if you look at a project like vectorAddDrv and study it carefully, you will observe that there is a compilation process for the .cu file that contains the CUDA C++ kernel definition, and that it must be compiled with nvcc (or NVRTC). That compilation step creates a PTX file which the driver API can read directly. The second compilation process handles the .cpp file which makes the actual driver API calls, and that particular compilation process can be done with the host compiler only (nvcc is not required).
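The first of those two steps — compiling the kernel offline to PTX — can be sketched as a single command (`kernel.cu` is a placeholder file name):

```shell
# Compile device code to PTX, which cuModuleLoad can consume at run time.
nvcc -ptx kernel.cu -o kernel.ptx
```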

I was answering for this second process. When compiling the .cpp file that contains, for example, your main routine and calls driver API library routines, you can compile it using the host compiler. It only requires linking against cuda.lib, and it will require that you include the cuda.h header file.
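As a minimal sketch of that second process (host compiler only, link against cuda.lib; error handling is abbreviated, and `kernel.ptx` and the kernel name `vecAdd` are placeholders for whatever your first compilation step produced):

```cuda
#include <cuda.h>   // driver API header; link cuda.lib (Windows) / -lcuda (Linux)
#include <cstdio>

int main() {
    CUdevice device;
    CUcontext context;
    CUmodule module;
    CUfunction kernel;

    // Initialize the driver API before any other driver API call.
    if (cuInit(0) != CUDA_SUCCESS) { std::printf("cuInit failed\n"); return 1; }
    cuDeviceGet(&device, 0);
    cuCtxCreate(&context, 0, device);

    // Load the PTX produced by nvcc (or NVRTC) and look up the kernel by name.
    cuModuleLoad(&module, "kernel.ptx");
    cuModuleGetFunction(&kernel, module, "vecAdd");

    // ... allocate device buffers and launch with cuLaunchKernel ...

    cuModuleUnload(module);
    cuCtxDestroy(context);
    return 0;
}
```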

If you want to understand how to compile CUDA C++ device source code (e.g. CUDA C++ kernel code) to make it “ready” for consumption by the driver API, why not study a sample project like vectorAddDrv and see how the settings are for the .cu file (the kernel code) in that project?

The project settings are fairly simple. You will add the .cu file to the project as a file to be compiled by nvcc, and you will designate that the compilation output be PTX (for example).

If you are asking how to start with a C++ console project that knows nothing about nvcc, and add all the project customizations that are needed to use nvcc properly as a primary compiler (so that, for example, you can compile a .cu file containing a kernel definition), I don’t have a recipe for you. Speaking for myself, personally, I would never attempt that. The integration work has been done for you already if you select the proper project type, and I don’t know of anywhere that NVIDIA documents all the integration needed to make VS and nvcc work together. Furthermore, it is almost certainly specific to a particular version of VS and probably other factors (like CUDA version). Good luck!

(whenever I have done things for which the VS project structure stumps me, I usually revert to using command line compilation on windows. Also not formally documented by NVIDIA, so it requires trial and error, and inspection/study of console output from test cases constructed using the VS project structure.)

(Alternatively, NVRTC obviates the need for the nvcc compilation step, but that goes outside of what is provided only by the driver API. Nevertheless, with NVRTC, it should be possible to handle everything with the host compiler, including compilation of CUDA C++ device source code.)
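A rough sketch of that NVRTC route (link against nvrtc.lib in addition to cuda.lib; error checking is omitted and the kernel source string is illustrative):

```cuda
#include <cuda.h>
#include <nvrtc.h>
#include <vector>

// Device source as a plain string -- no .cu file and no nvcc step needed.
const char *src = "extern \"C\" __global__ void kernel() { }\n";

int main() {
    // Compile the CUDA C++ device source to PTX at run time.
    nvrtcProgram prog;
    nvrtcCreateProgram(&prog, src, "kernel.cu", 0, nullptr, nullptr);
    nvrtcCompileProgram(prog, 0, nullptr);

    size_t ptxSize;
    nvrtcGetPTXSize(prog, &ptxSize);
    std::vector<char> ptx(ptxSize);
    nvrtcGetPTX(prog, ptx.data());
    nvrtcDestroyProgram(&prog);

    // Hand the PTX to the driver API, just as with an offline-compiled file.
    cuInit(0);
    CUdevice dev;  CUcontext ctx;  CUmodule mod;  CUfunction fn;
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);
    cuModuleLoadData(&mod, ptx.data());
    cuModuleGetFunction(&fn, mod, "kernel");
    // ... launch with cuLaunchKernel ...
    cuCtxDestroy(ctx);
    return 0;
}
```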


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.