Ubuntu 20.04, GCC 9.3, CUDA Toolkit 11.3 - not a supported combination?

I am compiling a C++ library with CUDA kernels inside on Ubuntu 20.04.

The GCC version I am using is 9.3 (which I believe is the default for Ubuntu 20.04).
I am using CUDA Toolkit 11.3.

When I compile the ‘.cu’ files (which contain the CUDA kernels) with NVCC, I get the following error:
“error: argument list for class template “std::pair” is missing”.
The error occurs in file ‘/usr/include/c++/9/bits/stl_pair.h’.
I don’t know whether the error comes from the host (GCC) or device (NVCC) compiler.

I googled a bit, and according to [1] this error indicates that the GCC and Toolkit versions are NOT compatible.
So I am wondering whether GCC 9.3 and Toolkit 11.3 are compatible on Ubuntu 20.04 or not.

Side note: I also get warnings because I am including the CUDA header files ‘device_functions.h’ and ‘math_functions.h’, but I think that is not related to the issue.

References:
[1] Compiler Error: Rust-cc and Cuda nvcc - "std::pair" is missing - Stack Overflow

According to the installation guide for CUDA 11.3, Ubuntu 20.04.1 and GCC 9.3 are supported.

The CUDA toolchain automatically includes CUDA-specific header files when compiling .cu files, so manual inclusion of CUDA-specific header files should not be necessary and should in fact be avoided. That has been true since the very beginning of CUDA. The rationale for the auto-inclusion was that the hurdle when moving from plain C/C++ to CUDA should be as low as possible.
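For illustration, a trivial kernel like this (not from the library under discussion) compiles with nvcc without any manual CUDA includes; the built-in thread variables and device math functions are available automatically:

// scale.cu — minimal illustration, not from the library under discussion.
// nvcc pre-includes the CUDA headers, so neither device_functions.h nor
// math_functions.h needs to be (or should be) included manually.
__global__ void scale(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = factor * sqrtf(data[i]);   // device sqrtf works out of the box
}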

Inclusion of cuda_runtime.h is necessary when accessing CUDA APIs from regular C++ code, of course.
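For instance, a minimal host-only sketch (file name is illustrative; compile with the host compiler and link against the CUDA runtime):

// query_devices.cpp — plain C++ translation unit, minimal sketch.
// Compile with, e.g.:
//   g++ query_devices.cpp -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lcudart
#include <cuda_runtime.h>   // required here: there is no nvcc auto-inclusion for .cpp files
#include <cstdio>

int main()
{
    int n = 0;
    cudaError_t err = cudaGetDeviceCount(&n);
    std::printf("CUDA devices: %d (%s)\n", n, cudaGetErrorString(err));
    return 0;
}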

Updating from CUDA Toolkit 11.3 to 11.5 did not solve the issue. I will investigate the issue a bit deeper. Where are the output log files produced by the different components of NVCC (host compiler, device compiler, ptxas)? I want to check them.

It’s messy (lots of files), but you can easily see all those output files by specifying the keep option to nvcc. The verbose option might also be of interest.
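For example (file and directory names are illustrative):

nvcc --verbose --keep --keep-dir /tmp/nvcc_intermediates -c Arithmetic.cu -o Arithmetic.o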

As usual, excellent support here by NVIDIA (Robert) and others (njuffa etc.), thanks! That’s one thing I like about CUDA.

I have already added the --verbose option; thanks for pointing out the additional ‘keep’ option.
Is there some explanation of which log file belongs to which component?
Specifically, where does the output of the host compiler go, and where does the output of the device compiler go?

For that, if it were me, I would have to parse the verbose output. For example, if you say “device compiler” my best guess is you are referring to ptxas (converts PTX to SASS). In that case, I would look through the verbose output for a command line that begins with ptxas ... That command line should show all the input and output file names used by that command. I don’t know of an a-priori decoder ring and such a decoder ring would probably be harder to parse anyway, given all the temp file naming conventions used.
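For instance, since nvcc echoes each tool invocation on a line starting with ‘#$’, something along these lines pulls them all out for inspection (file names are illustrative):

nvcc --verbose -c Arithmetic.cu -o Arithmetic.o 2>&1 | grep '^#\$'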

Below is the output of the respective NVCC step where the error occurs.
I have a file ‘Arithmetic.cu’ which is processed by NVCC.
I compile only for compute capability 8.0 (machine code).

The strange thing is that even when I delete everything in Arithmetic.cu (so the file is completely empty), I still get the error below… Does anyone have ideas? I am also wondering what the ‘cicc’ component actually is. And I am not sure whether I should change the ‘-std=c++17’ flag in the ‘cc’ invocation to c++14 (so that it matches the ‘--c++14’ flag in the ‘cicc’ invocation), or remove it entirely. Any help or suggestions are appreciated.

– Output from NVCC (verbose mode) –

-- Generating /storage/fah1/project/common/libs/Cuda/CudaIplAlg/build_gcc93_ub2004x64_Debug/CMakeFiles/cuda_compile_1.dir/src/Impl/./cuda_compile_1_generated_Arithmetic.cu.o
/usr/local/cuda/bin/nvcc /storage/fah1/project/common/libs/Cuda/CudaIplAlg/src/Impl/Arithmetic.cu -c -o /storage/fah1/project/common/libs/Cuda/CudaIplAlg/build_gcc93_ub2004x64_Debug/CMakeFiles/cuda_compile_1.dir/src/Impl/./cuda_compile_1_generated_Arithmetic.cu.o -ccbin /bin/cc -m64 -Xcompiler ,\"-g\",\"-g\" -Xcompiler=-march=corei7-avx -Xcompiler=-fpermissive -Xcompiler=-fno-strict-aliasing -Xcompiler=-std=c++17 -Xcompiler=-Wno-narrowing -gencode arch=compute_80,code=sm_80 --verbose --keep --keep-dir /storage/fah1/project/common/libs/Cuda/CudaIplAlg/build_gcc93_ub2004x64_Debug/nvcc_output_log_files -D_DEBUG -DJRS_ARCH64 -DJRS_UNIX -DHAVE_SSE -DHAVE_SSE2 -DJRS_OS_ID=ub2004x64 -DJRS_OS_ID_STR=\"ub2004x64\" -DJRS_LIBRARY_VER_MAJOR=3 -DJRS_LIBRARY_VER_MINOR=0 -DJRS_LIBRARY_VER_COMPOSED=VER_3_0 -DCUDAIPLALG_EXPORTS -DJRS_CUDA_TOOLKIT_VERSION=1150 -G -g -DNVCC -I/usr/local/cuda/include -I/storage/fah1/project/common/libs/IplWavelet3.0/include -I/storage/fah1/project/common/libs/Baselib3.0/include -I/storage/fah1/project/common/libs/IplWithIpp3.0/include -I/storage/fah1/project/common/libs/IplBase3.0/include -I/storage/fah1/project/common/libs/IplAlg3.0/include -I/storage/fah1/project/common/libs/Cuda/CudaBase3.0/include -I/storage/fah1/project/common/libs/Cuda/CudaCV3.0/include -I/storage/fah1/project/common/libs/Cuda/CudaCVCoreNG2.0/include -I/storage/fah1/project/common/libs/Cuda/CudaCVCoreNGBridge2.0/include -I/storage/fah1/project/common/libs/Cuda/CudaIplWithNPP3.0/include -I/storage/fah1/project/common/libs/Cuda/CudaIplAlg/include
#$ _NVVM_BRANCH_=nvvm
#$ _SPACE_= 
#$ _CUDART_=cudart
#$ _HERE_=/usr/local/cuda/bin
#$ _THERE_=/usr/local/cuda/bin
#$ _TARGET_SIZE_=
#$ _TARGET_DIR_=
#$ _TARGET_DIR_=targets/x86_64-linux
#$ TOP=/usr/local/cuda/bin/..
#$ NVVMIR_LIBRARY_DIR=/usr/local/cuda/bin/../nvvm/libdevice
#$ LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:
#$ PATH=/usr/local/cuda/bin/../nvvm/bin:/usr/local/cuda/bin:/usr/bin:/usr/bin/bin:/sbin:/bin:/usr/local/bin:/snap/bin
#$ INCLUDES="-I/usr/local/cuda/bin/../targets/x86_64-linux/include"  
#$ LIBRARIES=  "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib/stubs" "-L/usr/local/cuda/bin/../targets/x86_64-linux/lib"
#$ CUDAFE_FLAGS=
#$ PTXAS_FLAGS=
#$ "/bin"/cc -D__CUDA_ARCH__=800 -D__CUDA_ARCH_LIST__=800 -E -x c++  -DCUDA_DOUBLE_MATH_FUNCTIONS -D__CUDACC__ -D__NVCC__ -D__CUDACC_DEBUG__  "-g" "-g" -march=corei7-avx -fpermissive -fno-strict-aliasing -std=c++17 -Wno-narrowing -I"/usr/local/cuda/include" -I"/storage/fah1/project/common/libs/IplWavelet3.0/include" -I"/storage/fah1/project/common/libs/Baselib3.0/include" -I"/storage/fah1/project/common/libs/IplWithIpp3.0/include" -I"/storage/fah1/project/common/libs/IplBase3.0/include" -I"/storage/fah1/project/common/libs/IplAlg3.0/include" -I"/storage/fah1/project/common/libs/Cuda/CudaBase3.0/include" -I"/storage/fah1/project/common/libs/Cuda/CudaCV3.0/include" -I"/storage/fah1/project/common/libs/Cuda/CudaCVCoreNG2.0/include" -I"/storage/fah1/project/common/libs/Cuda/CudaCVCoreNGBridge2.0/include" -I"/storage/fah1/project/common/libs/Cuda/CudaIplWithNPP3.0/include" -I"/storage/fah1/project/common/libs/Cuda/CudaIplAlg/include" "-I/usr/local/cuda/bin/../targets/x86_64-linux/include"    -D "_DEBUG" -D "JRS_ARCH64" -D "JRS_UNIX" -D "HAVE_SSE" -D "HAVE_SSE2" -D "JRS_OS_ID=ub2004x64" -D "JRS_OS_ID_STR=\"ub2004x64\"" -D "JRS_LIBRARY_VER_MAJOR=3" -D "JRS_LIBRARY_VER_MINOR=0" -D "JRS_LIBRARY_VER_COMPOSED=VER_3_0" -D "CUDAIPLALG_EXPORTS" -D "JRS_CUDA_TOOLKIT_VERSION=1150" -D "NVCC" -D__CUDACC_VER_MAJOR__=11 -D__CUDACC_VER_MINOR__=5 -D__CUDACC_VER_BUILD__=50 -D__CUDA_API_VER_MAJOR__=11 -D__CUDA_API_VER_MINOR__=5 -D__NVCC_DIAG_PRAGMA_SUPPORT__=1 -include "cuda_runtime.h" -m64 -g "/storage/fah1/project/common/libs/Cuda/CudaIplAlg/src/Impl/Arithmetic.cu" -o "/storage/fah1/project/common/libs/Cuda/CudaIplAlg/build_gcc93_ub2004x64_Debug/nvcc_output_log_files/Arithmetic.cpp1.ii" 
#$ cicc --c++14 --gnu_version=90300 --display_error_number --orig_src_file_name "/storage/fah1/project/common/libs/Cuda/CudaIplAlg/src/Impl/Arithmetic.cu" --orig_src_path_name "/storage/fah1/project/common/libs/Cuda/CudaIplAlg/src/Impl/Arithmetic.cu" --allow_managed --debug_mode   -arch compute_80 -m64 --no-version-ident -ftz=0 -prec_div=1 -prec_sqrt=1 -fmad=1 --include_file_name "Arithmetic.fatbin.c" -g -O0 -tused --gen_module_id_file --module_id_file_name "/storage/fah1/project/common/libs/Cuda/CudaIplAlg/build_gcc93_ub2004x64_Debug/nvcc_output_log_files/Arithmetic.module_id" --gen_c_file_name "/storage/fah1/project/common/libs/Cuda/CudaIplAlg/build_gcc93_ub2004x64_Debug/nvcc_output_log_files/Arithmetic.cudafe1.c" --stub_file_name "/storage/fah1/project/common/libs/Cuda/CudaIplAlg/build_gcc93_ub2004x64_Debug/nvcc_output_log_files/Arithmetic.cudafe1.stub.c" --gen_device_file_name "/storage/fah1/project/common/libs/Cuda/CudaIplAlg/build_gcc93_ub2004x64_Debug/nvcc_output_log_files/Arithmetic.cudafe1.gpu"  "/storage/fah1/project/common/libs/Cuda/CudaIplAlg/build_gcc93_ub2004x64_Debug/nvcc_output_log_files/Arithmetic.cpp1.ii" -o "/storage/fah1/project/common/libs/Cuda/CudaIplAlg/build_gcc93_ub2004x64_Debug/nvcc_output_log_files/Arithmetic.ptx"
/usr/include/c++/9/bits/stl_pair.h(442): error: argument list for class template "std::pair" is missing
/usr/include/c++/9/bits/stl_pair.h(442): error: expected a ")"
/usr/include/c++/9/bits/stl_pair.h(442): error: template parameter "_T1" may not be redeclared in this scope
/usr/include/c++/9/bits/stl_pair.h(442): error: expected a ";"
4 errors detected in the compilation of "/storage/fah1/project/common/libs/Cuda/CudaIplAlg/src/Impl/Arithmetic.cu".

My intuition seems to be correct :-) Removing the ‘-std=c++17’ compiler flag from the gcc call seems to eliminate the error.
GCC 9.3 then falls back to its default C++ standard, which is C++14.
The default C++ standard for a specific compiler version can be determined via the command given in the last answer at c++11 - How to determine what C++ standard is the default for a C++ compiler? - Stack Overflow
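For reference, a one-liner along these lines prints the default __cplusplus value (201402L, i.e. C++14, on GCC 9.3):

g++ -dM -E -x c++ /dev/null | grep -F __cplusplus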

So it seems that NVCC does NOT like it when GCC does not use its default C++ standard.
One piece of advice from me to the NVCC compiler team: check for this case and emit a better error message. That would also help other people; it was quite difficult for me to find the root cause of this error.

You should never do that with nvcc. (By the way, this is the first time in this thread that you have actually shown the command you are using, from what I can see.)

If you want to specify an alternate c++ standard, do so with the appropriate switch from the nvcc command line. This keeps the host compiler and the components provided by the NVIDIA toolchain in sync.

Here are some proper usage examples:

nvcc -std=c++11 ...
nvcc -std=c++14 ...
nvcc -std=c++17 ...

For older versions of nvcc that may not support a particular standard (e.g. -std=c++17), the solutions are:

  1. upgrade to a newer version of nvcc (or)
  2. segregate the host code sections that depend on the newer C++ standard not yet supported by nvcc into their own .cpp files and compile those directly with the host compiler. Use wrapper functions as needed to tie the functionality together (see the sketch below).
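A minimal sketch of option 2, with hypothetical file and function names (the C++17-dependent code lives in a .cpp that only the host compiler ever sees):

// host_only.cpp — hypothetical example; compile directly with the host compiler:
//   g++ -std=c++17 -c host_only.cpp
#include <map>
#include <string>

// Plain wrapper function; the .cu file only ever sees its declaration.
int count_for_key(std::map<std::string, int>& counts, const std::string& key)
{
    // Structured bindings are a C++17 feature an older nvcc may reject.
    auto [iter, inserted] = counts.emplace(key, 0);
    ++iter->second;          // bump the count for this key
    return iter->second;
}

// In kernels.cu (compiled with e.g. nvcc -std=c++14), just declare and call it:
//   int count_for_key(std::map<std::string, int>&, const std::string&);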

Many thanks. I forgot about the -Xcompiler=-std=c++17 switch because it is added automatically in our workflow. We have a custom CMake workflow where each GCC compiler flag <gcc_flag> is automatically propagated to the NVCC (host) compiler flag ‘-Xcompiler=<gcc_flag>’. That is useful for keeping the compilers in sync for flags like e.g. -Wno-narrowing or -fpic. But I should skip this propagation for the ‘-std=XYZ’ flags.
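Something like this filter should do it in our case (a sketch only; the flag list is illustrative, and CUDA_NVCC_FLAGS is the FindCUDA-style variable our build appears to use):

# Sketch: strip -std=... before forwarding host flags to nvcc via -Xcompiler.
set(host_flags -march=corei7-avx -fpermissive -fno-strict-aliasing -std=c++17 -Wno-narrowing)
set(forwarded_flags ${host_flags})
list(FILTER forwarded_flags EXCLUDE REGEX "^-std=")
foreach(flag IN LISTS forwarded_flags)
  list(APPEND CUDA_NVCC_FLAGS "-Xcompiler=${flag}")
endforeach()
# Select the C++ standard once, on the nvcc command line, so host and device stay in sync.
list(APPEND CUDA_NVCC_FLAGS -std=c++17)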

I am not sure whether this requirement of keeping both compilers ‘in sync’ (with respect to the C++ standard) is mentioned in the NVCC documentation. Or perhaps I overlooked it.

I suppose the situation is the same on Windows with Visual Studio (2017, 2019)?

I don’t know that it is stated anywhere. If you feel the docs could be improved you can always file a bug.

I believe so.

FWIW I have filed an internal bug with a suggestion to make an addition to this section of the nvcc manual to clarify that that is the only correct method to select the dialect of the host compiler from the nvcc command line. I cannot say if it will be incorporated or when. I won’t be able to respond to further questions about the status of that bug.