I would like to test my CUDA application on other computers which do not have the CUDA libraries installed. I searched the NVCC documentation and I think the right option would be -run to include all libraries in the .exe; however, I get the following error message:
If you program your application using the driver API only (not the runtime API), then as long as they have the correct nVidia video driver installed and CUDA-enabled hardware in the machine, your application will work without installing anything else. Note: make sure that they’ve got the video driver installed that supports (at minimum) the version of CUDA that you are using (1.1, 2.0, 2.1 beta, etc.)
The only library you need is cudart.dll. Just put it in the same directory as your exe. But if the host computer doesn’t have a CUDA-capable card or the CUDA drivers, then it’s a whole other (more complicated) issue.
Thanks for the replies. The computers have CUDA-capable cards. I would like to compare the computation time on different graphics cards, but all of them are CUDA capable.
So, basically I have to copy the cudart library into the same directory as the application, and the video drivers need to support the CUDA version I used. Do I also need to install the CUDA drivers on the computers?
The standard nVidia drivers (the recent versions, anyway) contain built-in support for CUDA, if you are using the driver API (hence the name). It’s only if you are writing your program with the runtime API that you need to redistribute cudart.dll along with your program.
I copied the cudart.so to the directory where my executable file is; however, the application does not find the library. I was looking for a compile option to define a library directory for runtime, but I can’t see such an option in the nvcc documentation.
Can anyone tell me the option for defining that directory?
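Not sure this is your exact setup, but on Linux there is no nvcc-specific flag for this: you either point the dynamic loader at the directory via LD_LIBRARY_PATH, or pass an rpath through nvcc to the host linker. Also note that on Linux the runtime library is normally named libcudart.so, not cudart.dll. A rough sketch (the /opt/myapp path and myapp name are just placeholders):

```shell
# Option 1: tell the dynamic loader where libcudart.so lives
# (run this before launching the app, or put it in a wrapper script)
export LD_LIBRARY_PATH="/opt/myapp:${LD_LIBRARY_PATH}"
# ./myapp

# Option 2: bake the search path into the binary at link time.
# '$ORIGIN' makes the loader look in the executable's own directory,
# so shipping libcudart.so next to the exe works without any
# environment setup on the target machine:
#   nvcc -o myapp myapp.cu -Xlinker -rpath -Xlinker '$ORIGIN'
```

With option 2 the user just unpacks the exe and the .so into the same folder and runs it.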
Is there any manual on the different usages of the driver api and the runtime api?
I would also like to create a project that’s able to run on other systems which only have the (correct) nvidia driver installed (both Windows and Linux), but I couldn’t find any information about this in the documentation. Where should I be looking for this?
Check the Programming Guide. The way I look at it, the runtime API provides some shortcuts to simplify your code development at the cost of requiring cudart.dll to be distributed with your application. If you want your app to be able to run on any CUDA-enabled device, use the driver API (though you will have to do a little more work to get your program working properly).
Where do you get your assertion that driver api runs on more devices than runtime api?
The way I see it, the Driver API exists for strict backward compatibility with the C standard and third-party compilers, which matters to a small minority of people. However, the Driver API doesn’t add any power or expressivity despite being “low-level” (which is something people need to understand). In fact, it is missing huge chunks of features compared to the Runtime API, while also being a lot more verbose and needlessly complicating the codebase. No one should be using the Driver API unless they have a very good reason.
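To make the verbosity trade-off concrete, here is a rough sketch of the simplest device-memory allocation through each API. Error checking is omitted, and it obviously only runs on a machine with a CUDA-capable card and the nvidia driver installed; the point is that the driver-API version needs nothing beyond the display driver (nvcuda.dll / libcuda.so) on the target machine, while the runtime-API one-liner needs cudart shipped alongside it.

```
// Runtime API: one call, but requires redistributing cudart:
//   cudaMalloc((void**)&dptr, 1024);

// Driver API: the same operation, spelled out by hand.
#include <cuda.h>

int main(void)
{
    CUdevice    dev;
    CUcontext   ctx;
    CUdeviceptr dptr;

    cuInit(0);                  // must be called before anything else
    cuDeviceGet(&dev, 0);       // pick the first CUDA device
    cuCtxCreate(&ctx, 0, dev);  // create a context by hand -- the
                                // runtime API does this implicitly
    cuMemAlloc(&dptr, 1024);    // driver-API spelling of cudaMalloc

    cuMemFree(dptr);
    cuCtxDestroy(ctx);
    return 0;
}
```

Multiply that boilerplate by module loading and kernel launches (cuModuleLoad, cuLaunchGrid, manual parameter setup) and you can see why both sides of this argument exist.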