[resolved] Forcing CUDA 8 nvcc to use a non-system compiler

is there a way for me to force nvcc to use something other than the system compiler?

I understand that Fedora 24 is not officially supported with CUDA 8, but long story short I really don’t want to revert to Fedora 23 if at all possible.

I tried a lot of different things, compiling GCC 4.9.2 as well as 5.3.0. Fedora ships with gcc 6 now, which is unsupported. I’m not really sure how the whole stack works, but after installing Intel’s compiler (16 update 4 or something) I realized that icc / icpc actually just wrap around the system compiler.

I installed clang 3.8 and have been trying to get things working with that, since I thought I had just messed up compiling gcc, but I realize now that the system gcc is being used for compiling only the device code. I also don’t really understand clang / llvm; could it be doing the same thing as Intel and wrapping the system gcc?

I guess the point of the question, in general, is how to make sure the compiler used for host code is the same as the one used for device code. It seems strange to me that you can change the host compiler but then get a bunch of conflicts when device code is compiled, since it immediately grabs gcc 6’s include files and fails with a bunch of redefinition errors.

Initially, I thought the issue was that I hadn’t found the right combination of CMake options, but I made a reduced example and wrote a Makefile with some ifdefs that #error if __GNUC__ is defined, and they only fire when the device code is compiled – not when pure C++ code is compiled, since clang is being used for that.

Alternatively, is this something that is configured at the time CUDA is installed, and/or is there a way for me to have CUDA compiled with a different (i.e. clang) compiler?

I am sorry if this is not the right place to ask this question; there were a lot of potential causes for this scenario, but I feel that the issue is with CUDA, not CMake or my custom gcc builds (?.. still a novice…).

It seems like this may even be a feature? Otherwise CMake would probably expose something like CUDA_DEVICE_COMPILER and set it to CUDA_HOST_COMPILER?

Thank you for any impressions / suggestions / help!

If you leave CMake out of it, it should be entirely possible to build CUDA code using e.g. the gcc 5.3.1 toolchain on Fedora 24 with CUDA 8.

Yes, you have to install the “older” gcc toolchain if it’s not already available.

Thereafter, you can tell nvcc to use your desired toolchain with the -ccbin switch. This switch applies to both host and device compilation paths. (To be clear, the device path doesn’t use the host compiler, but there may be some toolchain dependencies due to the preprocessor includes, etc.)
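As a sketch of that invocation (all paths illustrative, assuming gcc 5.3.1 was installed under /usr/local/gcc-5.3.1):

```shell
# Sketch: select the alternative host toolchain with -ccbin.  That
# toolchain's headers are then also used when nvcc preprocesses the
# device code, which avoids the gcc 6 include clashes.
NVCC=/usr/local/cuda-8.0/bin/nvcc        # illustrative path
HOST_CXX=/usr/local/gcc-5.3.1/bin/g++    # illustrative path
CMD="$NVCC -ccbin $HOST_CXX -std=c++11 -o vectorAdd vectorAdd.cu"
echo "$CMD"    # inspect the command, then run with: eval "$CMD"
```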

I wouldn’t be able to address any questions about how to make this all work with CMake. As far as I am concerned, that is a separate tool, and I can’t say much about it.

For mixed project makefiles, where you may be using the host compiler directly to compile e.g. .cpp files, you would need to specify the same particular version of gcc that you are specifying to nvcc via the -ccbin switch.
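A sketch of that rule (paths and file names illustrative): the pure C++ compile line and the .cu compile line should both name the same compiler.

```shell
# Sketch: in a mixed project, the plain C++ objects and the CUDA
# objects must be driven by the same gcc, or the two halves will
# see different libstdc++ headers.
HOST_CXX=/usr/local/gcc-5.3.1/bin/g++    # illustrative path
CPP_RULE="$HOST_CXX -std=c++11 -c host_only.cpp -o host_only.o"
CU_RULE="nvcc -ccbin $HOST_CXX -std=c++11 -c kernels.cu -o kernels.o"
printf '%s\n%s\n' "$CPP_RULE" "$CU_RULE"
```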

Just to clarify, do I need to re-install CUDA 8? Or, as long as I’ve set up the alternative GCC, should it be good?

I stripped down one of the Makefiles provided in the samples and am doing what you said (with -ccbin)

HOST_COMPILER ?= clang++
NVCC          := $(CUDA_PATH)/bin/nvcc -ccbin $(HOST_COMPILER)
ALL_CCFLAGS += -std=c++11 -I$(CUDA_PATH)/samples/common/inc

However, when I did this with g++-53 or g++-49, compilation of the .cu files would still fail on redefinition errors coming from gcc 6.2’s include paths.

Completely agree about CMake; I won’t cross that bridge until I get it to compile with make. But in theory (based on looking at the source) it does the same thing, and I can manually supply -ccbin as well.

The -ccbin switch is used to specify the compiler bindir, i.e. the directory containing the compiler executables (see the nvcc documentation).

So if you properly specify a directory containing only gcc 5.3.1, there is no way nvcc should be picking up any gcc 6.x bits.

Beyond that, I’m not sure I can help. You would need to provide a concise, complete example if it is not working.

If you specify CC and CXX when running CMake (at least for the initial configure; I haven’t tried doing it afterwards), nvcc will end up using that compiler. For example, if you run CC=/usr/bin/gcc-4.9 CXX=/usr/bin/g++-4.9 cmake ~/my_project, nvcc will use gcc-4.9. I assume that the FindCUDA CMake module takes care, behind the scenes, of properly setting up the environment for nvcc.
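A sketch of that first-configure invocation (paths illustrative). One caveat worth noting: CMake caches the compiler choice, so changing CC/CXX on a later run has no effect until the build directory (or at least CMakeCache.txt) is wiped.

```shell
# Sketch: pin the host compiler on the *initial* CMake run.  CMake
# caches CMAKE_C(XX)_COMPILER, so to switch compilers afterwards,
# start again from a clean build directory.
CONFIGURE='CC=/usr/bin/gcc-4.9 CXX=/usr/bin/g++-4.9 cmake ~/my_project'
echo "$CONFIGURE"
```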

Thank you all for your responses. While I agree that what you have said (proper setting of either -ccbin or the CC and CXX combo) should work, there is a lot more going on behind the scenes.

I learned of an interesting tool called spack that was able to connect the dots for me; more specifically, it prepended EVERYTHING that was necessary from the older GCC builds. It turns out there is a lot more to it than just CC and CXX, which in hindsight isn’t surprising.

So if you are reading this and are trying to get CUDA 8 compiling on an unsupported platform, but your unsupported platform has a “too new” gcc, you can try using spack: https://github.com/LLNL/spack

Overall an extremely easy tool to use, and very active and helpful community.

Ultimately, I bit the bullet and reverted to fc23, because there are fundamental problems with using an older gcc. The majority of my desktop manager, related settings, and many other things required GLIBCXX symbol versions from gcc 6.2’s libstdc++. It was an interesting challenge, but as soon as I realized what was actually happening, the choice was basically to install a new desktop manager (and hope that it actually worked…) or just revert.

In summary, at a high level, CUDA 8 + CMake seem to be behaving as they are supposed to. I made a somewhat catastrophic mistake of setting enough variables to think it was right, but not enough for it to, say, find the right libstdc++ for the specified compiler. I never tested this with clang, but I would assume the same is true.

Thank you all very much for your answers, I wanted to verify as much as I could before posting.