Given that libcudart is less than a megabyte, I suspect the advantages of
linking to the shared library on a host with more
than a gigabyte of RAM are negligible.

My reason for raising this is that I recently had a potential user give up
because his LD_LIBRARY_PATH did not include /usr/local/cuda/lib64/.
Instead, I think he got an error like:
error while loading shared libraries: cannot open
shared object file: No such file or directory
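When a user reports that error, a quick first check is whether the binary's shared libraries resolve at all, and whether the loader path contains the CUDA directory. A minimal sketch, assuming the default install location /usr/local/cuda/lib64 and a hypothetical binary name ./barracuda:

```shell
# List the shared libraries the binary needs; unresolved ones
# show up as "not found" (./barracuda is a placeholder name).
# ldd ./barracuda | grep 'not found'

# Quick check: is the default CUDA library directory on LD_LIBRARY_PATH?
case ":$LD_LIBRARY_PATH:" in
  *:/usr/local/cuda/lib64:*) echo "cuda lib64 is on LD_LIBRARY_PATH" ;;
  *)                         echo "cuda lib64 is NOT on LD_LIBRARY_PATH" ;;
esac
```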

Is there any downside, expected or otherwise, to linking the program
directly to the CUDA runtime library?
Can this readily be done via the Makefile?
Is there a better solution?

As always, any comments or guidance would be most welcome.



What you are referring to is called static linking.

If you compile (and link) with nvcc, static linking against cudart is the default on CUDA 7, and has been for a while. If you link with g++, I think you can get the same effect by linking against the .a static library instead of the .so dynamic library.
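A sketch of the g++ variant, assuming the default CUDA install path and a host object main.o; note that on Linux the static runtime also pulls in a few system libraries:

```shell
# Link host code against the static CUDA runtime with g++.
# Paths assume the default /usr/local/cuda install; libcudart_static.a
# additionally needs pthread, rt and dl on Linux.
g++ main.o -o app \
    -L/usr/local/cuda/lib64 -lcudart_static \
    -lpthread -lrt -ldl
```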

A benefit is that the machine need not have that CUDA library installed. A downside is the increased binary size.

For nvcc you can control the default linking behavior with the -cudart switch:
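For example (a sketch; the switch accepts none, shared, or static):

```shell
# Link the CUDA runtime statically (the default on recent CUDA versions):
nvcc -cudart static main.cu -o app_static

# Link against libcudart.so instead, so the runtime library must be
# found at load time (e.g. via LD_LIBRARY_PATH):
nvcc -cudart shared main.cu -o app_shared
```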

Dear Bob,
Thank you for your reply, especially the -cudart switch.

What I was worrying about was the impact on the user.
At present it is very hard to get any feedback from users
(especially from people who decide not to use the program). If the
program image is 400 Kbytes bigger but is easier for the non-CUDA
expert to use, that sounds like an overall win?

Has anyone else seen problems with users failing to set LD_LIBRARY_PATH?
BarraCUDA is on SourceForge. Does anyone have advice on getting users
to give feedback?

Thanks again
ps: thanks to Nadeemm for re-posting for me. There was a problem
with CentOS Firefox ESR 38.2.0 yesterday.