As you may know, this is a recurring problem with VS versions throughout the years.
Before opening this topic I did some research and followed the instructions in this
thread:
https://devtalk.nvidia.com/default/topic/1027876/why-does-atomicadd-not-work-with-doubles-as-input-/
but unfortunately none of those solutions worked for me.
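For reference, the fallback that thread suggests (taken from the CUDA C Programming Guide) is a hand-rolled double-precision atomicAdd built on atomicCAS, guarded so it only exists when compiling for architectures below sm_60:

```cuda
#if !defined(__CUDA_ARCH__) || __CUDA_ARCH__ >= 600
// sm_60+ provides the native atomicAdd(double*, double) overload
#else
__device__ double atomicAdd(double* address, double val)
{
    unsigned long long int* address_as_ull = (unsigned long long int*)address;
    unsigned long long int old = *address_as_ull, assumed;
    do {
        assumed = old;
        // reinterpret bits, add, and attempt to swap the result in atomically
        old = atomicCAS(address_as_ull, assumed,
                        __double_as_longlong(val + __longlong_as_double(assumed)));
    } while (assumed != old);  // retry if another thread updated *address first
    return __longlong_as_double(old);
}
#endif
```

Since I am compiling for compute_61/sm_61, this guard should leave the native overload in place, which is why I expected the plain call to just work.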
My system specs: GTX 1060 6GB, so compute capability 6.1 is guaranteed; VS Enterprise 2017 15.9.5, with CUDA 10.0.
Finally, the compilation command and the error output:
C:\Users\Tyr\source\repos\Test2\Test2>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin\nvcc.exe" -gencode=arch=compute_61,code="sm_61,compute_61" --use-local-env -ccbin
"C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64" -x cu -I
"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\include" -I
"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\include" -G --keep-dir x64\Debug -maxrregcount=0 --machine 64 --compile
-cudart static -g -DWIN32 -DWIN64 -D_DEBUG -D_CONSOLE -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /Fdx64\Debug\vc141.pdb /FS /Zi /RTC1 /MDd " -o x64\Debug\kernel.cu.obj "C:\Users\Tyr\source\repos\Test2\Test2\kernel.cu"
1>C:/Users/Tyr/source/repos/Test2/Test2/kernel.cu(46): error : no instance of overloaded function "atomicAdd" matches the argument list
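For context, line 46 is an atomicAdd call on a double. This is not my actual kernel.cu, but a minimal kernel with the same call shape (hypothetical names) that triggers this exact error whenever the double overload is unavailable to the compiler:

```cuda
__global__ void sumKernel(double* out, const double* in, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(out, in[i]);  // double overload: requires compiling for sm_60 or higher
}
```

Given the -gencode=arch=compute_61,code="sm_61,compute_61" flag above, I would expect this overload to be visible, so I am not sure why the compiler still rejects it.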