run CUDA applications on other computers

Hello,

I would like to test my CUDA application on other computers which do not have the CUDA libraries installed. I searched the NVCC documentation and I think the right option would be -run to include all libraries in the .exe; however, I get the following error message:

[codebox]nvcc -run -c -g -I…/…/NVIDIA_CUDA_SDK/common/inc -o build/Run/GNU-Generic/Michelogram_cc.o Michelogram_cc.cc

nvcc fatal : More than one compilation phase specified[/codebox]

The only compilation option I specified is the -run option. I searched the makefile to see if the -compile option is set there, but I couldn’t find anything.

So, first of all, is the -run option the right way to go to get one executable file with all libraries?

And, why do I get this error?

If you program your application using the driver API only (not the runtime API), then as long as the target machines have the correct nVidia video driver installed and CUDA-enabled hardware, your application will work without installing anything else. Note: make sure the installed video driver supports (at minimum) the version of CUDA you are using (1.1, 2.0, 2.1 beta, etc.).

No.

The only library you need is cudart.dll. Just put it in the same directory as your exe. But if the host computer doesn’t have a CUDA-capable card or CUDA drivers, then it’s a whole other (more complicated) issue.

Thanks for the replies. The computers have a CUDA-capable card. I would like to compare the computation time on different graphics cards, but all of them are CUDA capable.

So, basically I have to copy the cudart library into the same directory as the application. Then the video drivers need to support the CUDA version I used, and I need to install the CUDA drivers on the computers?

Is that all or is there anything else?

The standard nVidia drivers (the recent versions, anyway) contain built-in support for CUDA, if you are using the driver API (hence the name). It’s only if you are writing your program with the runtime API that you need to redistribute cudart.dll along with your program.

Other than that, you’ve got it right.

I copied the cudart.so to the directory where my executable is; however, the application does not find the library. I was looking for a compile option to define a library directory at runtime, but I can’t see such an option in the nvcc documentation.

Can anyone tell me where I can define the directory?

The LD_LIBRARY_PATH environment variable.
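For example (a sketch; the directory and executable name are placeholders, not from the original post), on Linux you can point the dynamic loader at the directory that holds the cudart shared library before launching the program:

```shell
# Sketch: make the dynamic loader find cudart next to the executable.
# "$PWD" stands in for whatever directory holds your binary and the
# cudart shared library -- adjust as needed.
export LD_LIBRARY_PATH="$PWD:$LD_LIBRARY_PATH"

# Then launch the application from that directory, e.g.:
# ./your_app          (placeholder name)

# To check which cudart the loader will actually pick up:
# ldd ./your_app | grep cudart
```

Setting the variable in a small wrapper script is usually nicer than exporting it in each shell session.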

Is there any manual on the different usages of the driver api and the runtime api?

I would also like to create a project that can run on other systems which only have the (correct) nVidia driver installed (both Windows and Linux), but I couldn’t find any information about this in the documentation. Where should I be looking for this?

Thanks.

It’s possible that “-run” and “-c” are not compatible. That’s what I infer (with some guessing) from the error message: -c stops after compiling to an object file, while -run compiles, links, and executes, so specifying both gives nvcc two conflicting final phases.
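A sketch of how the build could be split instead (the SDK include path was elided in the original command, so `$SDK` below is a placeholder):

```shell
# Step 1: compile only (-c). Do not combine this with -run: each
# option selects a different final compilation phase, and nvcc
# accepts only one.
nvcc -c -g -I"$SDK/common/inc" \
     -o build/Run/GNU-Generic/Michelogram_cc.o Michelogram_cc.cc

# Step 2: link the object file into an executable.
nvcc -o michelogram build/Run/GNU-Generic/Michelogram_cc.o

# Alternatively, -run compiles, links, AND executes in a single call,
# replacing both steps above:
# nvcc -run -g -I"$SDK/common/inc" Michelogram_cc.cc
```

Note that -run does not bundle libraries into the executable; it just runs the program after building it.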

Check the Programming Guide. The way I look at it, the runtime API provides some shortcuts to simplify your code development at the cost of requiring cudart.dll to be distributed with your application. If you want your app to be able to run on any CUDA-enabled device, use the driver API (though you will have to do a little more work to get your program working properly).
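To give a feel for the extra verbosity, here is a minimal driver-API sketch (assuming the CUDA toolkit headers are available; error handling is trimmed to the essentials). It links only against the driver library (nvcuda.dll on Windows, libcuda.so on Linux), so no cudart needs to be shipped:

```c
/* Minimal driver-API sketch: initialize CUDA and enumerate devices.
 * Compile (Linux, path assumptions): gcc devices.c -lcuda -o devices */
#include <stdio.h>
#include <cuda.h>

int main(void) {
    int count = 0;
    char name[256];
    CUdevice dev;

    /* The driver API must be explicitly initialized. */
    if (cuInit(0) != CUDA_SUCCESS) {
        fprintf(stderr, "cuInit failed -- no CUDA driver installed?\n");
        return 1;
    }

    cuDeviceGetCount(&count);
    printf("CUDA devices: %d\n", count);

    for (int i = 0; i < count; ++i) {
        cuDeviceGet(&dev, i);
        cuDeviceGetName(name, sizeof(name), dev);
        printf("  device %d: %s\n", i, name);
    }
    return 0;
}
```

With the runtime API the equivalent code is shorter (no explicit cuInit, for instance), but the program then depends on cudart at load time.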

Where do you get the assertion that the driver API runs on more devices than the runtime API?

The way I see it, the driver API is for strict backward compatibility with the C standard and third-party compilers. This is important for a small minority of people. However, the driver API doesn’t add any power or expressivity despite being “low-level” (which is something people need to understand). In fact it is missing huge chunks of features compared to the runtime API, while also being a lot more verbose and needlessly complicating the codebase. No one should be using the driver API unless they have a very good reason.