Compiling the current (CUDA v10.1) Thrust libraries using Windows and Visual Studio results in the following warnings:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include\thrust\mr\allocator.h(96,48): warning C4003: not enough arguments for function-like macro invocation 'max'
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include\thrust/system/cuda/detail/sort.h(1218): warning C4244: 'initializing': conversion from 'Integer' to 'int', possible loss of data
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include\thrust/system/cuda/detail/reduce.h(939): warning C4244: 'argument': conversion from 'Size' to 'int', possible loss of data
These are not showstoppers. Because the warnings come from templated C++ code, building a large CUDA Thrust project produces an annoying snowstorm of them, but the compiled applications nevertheless run without CUDA errors.
The question is whether the results are reliable: is the compiler's default behavior in each case (substituting "empty" tokens for the missing macro arguments, truncating the larger data type to the smaller one) actually correct for the logic in the Thrust code? Or are there corner cases where empty arguments or data truncation would lead to incorrect results?