Using nvcc with the latest version of LLVM/clang

Hi,

I guess this is a bit more of a “bug report” than a help request, as by now I do know how to work around the issue. Also, “bug report” is in quotes because the thing that’s not working isn’t actually something that NVIDIA ever promised would work…

When using clang++ as the host compiler of nvcc on Linux, Clang 15 and the pre-release Clang 16 are not officially supported by nvcc, not even in the latest CUDA version. But while they are not “officially” supported, in practice Clang 15 works fine for all intents and purposes. (As long as Clang itself uses a version of libstdc++ that is supported by nvcc.) To make nvcc work with Clang 15, one “just” needs to pass -allow-unsupported-compiler as an nvcc argument.

But in the run-up to Clang 16 things broke. :-( Pre-release versions of Clang 16 could no longer be used as the host compiler, even with -allow-unsupported-compiler. One would see errors like:

 > nvcc -ccbin=`which clang++` -allow-unsupported-compiler ./helloWorld.cu
/usr/lib/gcc/x86_64-redhat-linux/11/../../../../include/c++/11/ext/type_traits.h(193): error: parameter pack "_Tp" was referenced but not expanded

/usr/lib/gcc/x86_64-redhat-linux/11/../../../../include/c++/11/ext/type_traits.h(193): error: expected an expression
...

With some help from “a competitor” (see: CUDA no longer compatible with Intel Clang · Issue #8037 · intel/llvm · GitHub) it turned out that this is caused by clang++ switching its default dialect from C++14 to C++17. This uncovered that nvcc does not correctly pass the C++ standard selection to the host compiler when you don’t explicitly ask nvcc itself for a specific C++ standard. (See the GitHub issue for a much better explanation of this.)

So… right now I can work around this issue by always calling nvcc with an explicit -std=c++XX flag. And I plan to ask the CMake developers to improve the CUDA detection code in CMake along these lines as well. (Resurrecting: CUDA not working with IntelLLVM host compiler (2023.01.13.) (#24320) · Issues · CMake / CMake · GitLab) But it would also make sense to fix this in CUDA itself, so that nvcc explicitly asks its host compiler for a specific C++ standard under all circumstances.
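For reference, the workaround described above looks something like this (a sketch, assuming a CUDA 12.x nvcc on Linux and the same helloWorld.cu test file as earlier; the -ccbin, -allow-unsupported-compiler and -std flags are all documented nvcc options):

```shell
# Pin the C++ dialect explicitly, so that nvcc forwards a matching
# -std=... flag to the Clang host compiler instead of leaving the
# host compiler on its own (C++17) default.
nvcc -ccbin="$(which clang++)" \
     -allow-unsupported-compiler \
     -std=c++17 \
     ./helloWorld.cu -o helloWorld
```

On the CMake side, setting CMAKE_CUDA_STANDARD (plus CMAKE_CUDA_STANDARD_REQUIRED) in the project has the same effect of forcing an explicit -std flag onto the nvcc command line.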

Hopefully this will reach the appropriate NVIDIA developer(s). ;-)

Cheers,
Attila

nvcc in CUDA 12.1 supports Clang 16 now, but it fails on Clang 17 as well 😂. Do you have any clue about these compilation errors? Help appreciated!

/home/wangjingbo/cpp/bin/../include/c++/v1/__type_traits/remove_const.h(23): error: identifier "__remove_const" is undefined
      using type __attribute__((__nodebug__)) = __remove_const(_Tp);
                                                ^

/home/wangjingbo/cpp/bin/../include/c++/v1/__type_traits/remove_const.h(27): error: identifier "__remove_const" is undefined
    using __remove_const_t = __remove_const(_Tp);
                             ^

/home/wangjingbo/cpp/bin/../include/c++/v1/__type_traits/remove_volatile.h(23): error: identifier "__remove_volatile" is undefined
      using type __attribute__((__nodebug__)) = __remove_volatile(_Tp);
                                                ^

/home/wangjingbo/cpp/bin/../include/c++/v1/__type_traits/remove_volatile.h(27): error: identifier "__remove_volatile" is undefined
    using __remove_volatile_t = __remove_volatile(_Tp);
                                ^

/home/wangjingbo/cpp/bin/../include/c++/v1/__type_traits/remove_cv.h(25): error: identifier "__remove_cv" is undefined
      using type __attribute__((__nodebug__)) = __remove_cv(_Tp);
                                                ^