How to use the static option with the g++ used by nvcc?

I wrote this makefile to compile foo.cu:

CC = g++
ARCH=sm_35
foo : foo.cu Makefile
        nvcc -std=c++11 -ccbin=$(CC) foo.cu -arch=$(ARCH) -o foo
.PHONY: clean
clean :
        rm -f foo

I’d like to use the static option with g++ in this makefile.
I don’t care about the g++ and CUDA versions. (Currently I’m using g++ 5.3.0 and CUDA 9.0.)

Please help me.
Thank you.

What do you mean by the “static option”?

Do you mean linking against cudart statically using g++?

Thank you for your reply.

do you mean linking against cudart statically using g++?
Yes. Put simply, I want

CC = g++

to become

CC = g++ -static

In general, you can pass host compiler options through nvcc with the -Xcompiler flag, in this case: -Xcompiler -static. However, as I recall (vaguely), -static is a flag controlling the link rather than the compilation step, so I am not sure this will have the desired effect in the context of the nvcc compiler driver. Worth a try, I’d say.
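Applied to your makefile, that rule would become something along these lines (an untested sketch, just adding the flag to your existing nvcc line):

foo : foo.cu Makefile
        nvcc -std=c++11 -ccbin=$(CC) -Xcompiler -static foo.cu -arch=$(ARCH) -o foo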

This might work better if you use separate compile and link steps in your build. Most non-trivial makefiles do that anyhow.
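For instance, a version of your makefile with separate compile and link steps might look roughly like this (a sketch, not tested, keeping the same flags as above):

CC = g++
ARCH = sm_35

# compile foo.cu to an object file
foo.o : foo.cu Makefile
        nvcc -std=c++11 -ccbin=$(CC) -arch=$(ARCH) -c foo.cu -o foo.o

# link step: -Xcompiler forwards -static to the host compiler (g++) driving the link
foo : foo.o
        nvcc -ccbin=$(CC) -arch=$(ARCH) -Xcompiler -static foo.o -o foo

.PHONY: clean
clean :
        rm -f foo foo.o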

To statically link the CUDA runtime, you can use the --cudart flag in nvcc:

nvcc --cudart=static -o test test.cu

This, however, is already the default in nvcc, so you shouldn’t need it. For statically linking the whole binary, including glibc, -Xcompiler -static should work, either at your link step or as an extra flag if you are compiling and linking all at once:

nvcc -Xcompiler -static -o test test.cu

This successfully compiles for me (CUDA 9 + GCC 6), but note the following warning:

/opt/cuda/bin/..//lib64/libcudart_static.a(libcudart_static.a.o): In function `cudart::globalState::loadDriverInternal()':
(.text+0x1b075): warning: Using 'dlopen' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking

The CUDA driver library (/usr/lib/libcuda.so) on whatever host system your program ends up running on is unlikely to be built against the same version of glibc that’s bundled into your statically linked program, so if I understand correctly, your program may behave unexpectedly as soon as it tries to initialize the driver or run any CUDA functions. I have version 387.34 of the NVIDIA drivers, which seems to be built against glibc 2.2.5, while my host system has glibc 2.26.0 (which is what gets built into anything I link with -static), and running a statically-linked test program segfaults as soon as it tries to launch a kernel.
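If you want to check this yourself, one way (assuming your driver library lives at /usr/lib/libcuda.so) is to list the versioned glibc symbols it references:

objdump -T /usr/lib/libcuda.so | grep GLIBC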

Because of this problem, statically linking the entire binary is unlikely to be very useful with CUDA, unless someone else knows of a way around this. You’re probably best off dynamically linking against glibc, statically linking the CUDA runtime library (which is the default anyway), and manually statically linking individual non-glibc third-party libraries like so:

nvcc -o test test.cu -l:libproj.a # replace -lXXX with -l:libXXX.a for each library

instead of

nvcc -o test test.cu -lproj

A couple of other things you could statically link are libgcc and libstdc++, which can be included like so:

g++ -static-libgcc -static-libstdc++

So with nvcc’s -Xcompiler added, altogether you may end up with something like this:

nvcc -o test test.cu -Xcompiler -static-libgcc -Xcompiler -static-libstdc++ -l:libproj.a -l:libfoo.a

For me, this strips out the dynamic library dependencies to just this:

% ldd test
        linux-vdso.so.1 (0x00007fff74699000)
        librt.so.1 => /lib64/librt.so.1 (0x00007fd1cbae9000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fd1cb8c9000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007fd1cb6c5000)
        libm.so.6 => /lib64/libm.so.6 (0x00007fd1cb382000)
        libc.so.6 => /lib64/libc.so.6 (0x00007fd1cafbf000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fd1cbfb3000)
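Going back to the original makefile, that last nvcc command might end up as a rule something like this (a sketch; libproj and libfoo are just placeholder third-party libraries from the example above):

foo : foo.cu Makefile
        nvcc -std=c++11 -ccbin=$(CC) foo.cu -arch=$(ARCH) -o foo -Xcompiler -static-libgcc -Xcompiler -static-libstdc++ -l:libproj.a -l:libfoo.a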