nvcc compiler flags in visual studio

hey, I want to add the --expt-extended-lambda flag to my visual studio project. I tried this:


with this code:

But as you can see, I still get an error. Where should I put my nvcc compiler flags?

If it were me, I would inspect the actual compile command line in the console output to see what is happening.

And I would not set the error list to Build+Intellisense, that is confusing IMO. I would set it to just Build.

I can’t figure out what is wrong from what you have shown. That is the joy of working in an IDE that hides much of the project organization behind opaque configuration screens.

Maybe someone else can help you.

1>------ Rebuild All started: Project: GpuNeuralNetwork, Configuration: Debug x64 ------
1>
1>{SolutionDir}>"nvcc.exe" -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64" -x cu  -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\include" -I"C:\vcpkg\installed\x64-windows\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\include"  -G   --keep-dir x64\Debug -maxrregcount=0  --machine 64 --compile  --expt-extended-lambda  -g   -DWIN32 -DWIN64 -D_DEBUG -D_CONSOLE -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od  /FS /Zi /RTC1 /MDd " -o x64\Debug\Kernals.cuh.obj "{SolutionDir}\Kernals.cuh" -clean
1>Kernals.cuh
1>Compiling CUDA source file Kernals.cuh...
1>
1>{SolutionDir}>"nvcc.exe" -gencode=arch=compute_35,code=\"sm_35,compute_35\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64" -x cu  -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\include" -I"C:\vcpkg\installed\x64-windows\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\include"  -G   --keep-dir x64\Debug -maxrregcount=0  --machine 64 --compile -cudart static --expt-extended-lambda -g   -DWIN32 -DWIN64 -D_DEBUG -D_CONSOLE -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /Fdx64\Debug\vc141.pdb /FS /Zi /RTC1 /MDd " -o x64\Debug\Kernals.cuh.obj "{SolutionDir}\Kernals.cuh"
1>Kernals.cuh
1>main.cpp
1>{SolutionDir}\utils.h(51): fatal error C1189: #error:  "please compile with --expt-extended-lambda"
1>Done building project "GpuNeuralNetwork.vcxproj" -- FAILED.
========== Rebuild All: 0 succeeded, 1 failed, 0 skipped ==========

looks fine to me, I just changed my real solution dir to {SolutionDir} (for privacy)

Changed.

Any other IDE suggestions would be welcome :)

Thank you for the fast response!

You are apparently including this header file in main.cpp

That won’t get compiled with the usual nvcc setup. nvcc passes that off to the host compiler, and the host compiler doesn’t define any of the __CUDACC__ macros, nor does it know what a CUDA experimental extended lambda is.
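Going by the error text, the check at utils.h line 51 is presumably shaped like the guard sketched below (I haven’t seen your header; `compiler_kind` is an illustrative helper, not part of your project):

```cpp
#include <cstring>

// Sketch of the sort of guard that produces your error (your utils.h
// may differ). nvcc defines __CUDACC__ while it compiles a translation
// unit, and __CUDACC_EXTENDED_LAMBDA__ only when --expt-extended-lambda
// is on its command line. cl.exe compiling main.cpp defines neither,
// so a check like this fires there no matter what nvcc flags you set
// (shown commented out so this file also builds with a host compiler):
//
// #if !defined(__CUDACC_EXTENDED_LAMBDA__)
// #error "please compile with --expt-extended-lambda"
// #endif

// Illustrative helper: reports which toolchain compiled this
// translation unit, based on the macros above.
inline const char* compiler_kind() {
#if defined(__CUDACC__)
    return "nvcc";
#else
    return "host compiler";
#endif
}
```

Build that translation unit with cl.exe (or g++) and `compiler_kind()` returns "host compiler"; only files actually routed through nvcc ever see the CUDA macros, which is why your flag change had no effect on main.cpp.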

Line 58 in your original posting (the kernel call) also wouldn’t compile properly as part of main.cpp

Your project structure is broken. This isn’t an nvcc compiler issue, or a matter of putting that compiler switch in the wrong place.

It’s also pretty bizarre to title a module with Kernals.cuh, but that isn’t the crux of this issue.

This is a problem with your project organization, not anything to do with CUDA, or visual studio, or your compile switch settings. You’re fundamentally trying to include CUDA code in a .cpp module, and that is basically a no-no.
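The conventional split looks something like this (a sketch only; the file and function names are illustrative, not taken from your project). All CUDA syntax stays in .cu files compiled by nvcc, and the rest of the program sees only plain C++ declarations:

```cuda
// kernels.h — plain C++ interface, safe to include from main.cpp.
// No __global__, no <<<...>>> launches, no CUDA-only macros here.
#pragma once
void run_scale(float* d_data, int n, float factor);

// kernels.cu — compiled by nvcc; the only place CUDA syntax appears.
#include <cuda_runtime.h>

__global__ void scale_kernel(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

void run_scale(float* d_data, int n, float factor) {
    int block = 256;
    int grid = (n + block - 1) / block;   // round up to cover all n
    scale_kernel<<<grid, block>>>(d_data, n, factor);
}

// main.cpp — compiled by cl.exe; calls the wrapper, never the kernel.
// #include "kernels.h"
// ... allocate and fill a device buffer d_buf, then:
//     run_scale(d_buf, n, 2.0f);
```

With that split, flags like --expt-extended-lambda only ever need to apply to the .cu files, where nvcc actually honors them.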

I guess I don’t understand proper CUDA project organization.
Where can I learn it?
I use Kernals.cuh because nvcc takes a long time to compile, so I wanted to use it as little as possible. If I add -x cu to the project, will it be fine?

You can learn it by studying CUDA sample projects.

I can’t tell you what would be fine for your project without studying the project myself and understanding the needs of each module.

You’re fundamentally trying to include CUDA code in a .cpp module, and that is basically a no-no.