Command-line tools for building CUDA kernels?

Are there any command-line tools for building CUDA kernels?

I have a very lengthy build chain and would like to automate the compilation and deployment of CUDA kernel code.

I can't even seem to build a static lib from the command line without tons of internal errors.

Are you talking about Windows?

As stated, your requirements are possibly underspecified, especially if your use case involves online compilation. For classical offline builds, nvcc, makefiles, and possibly some scripts are all you need for complex builds and automation, just as with CPU-only build environments. You can find a simple example of how to create a static library of CUDA code in these forums, and you may want to spend some quality time with the nvcc documentation.
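To make that concrete, a minimal offline build might look like the following sketch. The file names (kernel_a.cu, kernel_b.cu, main.cpp) are hypothetical placeholders, and nvcc is assumed to be on the PATH; the same commands work from a Windows batch file with Windows-style paths.

```shell
#!/bin/sh
set -e  # stop at the first failed step

# Compile each CUDA source file to an object file.
nvcc -O2 -c kernel_a.cu -o kernel_a.o
nvcc -O2 -c kernel_b.cu -o kernel_b.o

# Archive the objects into a static library (nvcc's --lib mode).
nvcc --lib kernel_a.o kernel_b.o -o libmykernels.a

# Link the library into the host application.
nvcc main.cpp libmykernels.a -o myapp
```

Drop these lines into a makefile or script and the whole chain is automatable, with no IDE involved.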

[Later:] No need to search the forums, NVIDIA provides an example of how to build a static library, it seems:

https://docs.nvidia.com/cuda/cuda-samples/index.html#simple-static-gpu-device-library

Here is a relevant forum thread:

https://devtalk.nvidia.com/default/topic/526645/how-to-create-a-static-lib-using-cuda-5-0-6-5-and-vs2010-problem-solved-and-bug-found-/

Windows 64-bit, for sure.

The static lib is what I am trying to build.

Can't seem to get it right outside of the IDE.

Looks like I was able to generate a kernel.cu.obj.

This batch file:

setlocal  

CALL "%VS140COMNTOOLS%..\..\vc\vcvarsall.bat" x86_amd64

SET PATH=%PATH%;"C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\x86_amd64\"

"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\bin\nvcc.exe" -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\x86_amd64" -x cu  -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\include"     --keep-dir x64\Release -maxrregcount=0  --machine 64 --compile      -DWIN32 -DWIN64 -DNDEBUG -D_CONSOLE -D_MBCS -Xcompiler "/EHsc /W3 /nologo /O2 /FS /Zi  /MD " -o x64\Release\kernel.cu.obj %1 -clean 

"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\bin\nvcc.exe" -gencode=arch=compute_30,code=\"sm_30,compute_30\" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\x86_amd64" -x cu  -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\include"     --keep-dir x64\Release -maxrregcount=0  --machine 64 --compile -cudart static     -DWIN32 -DWIN64 -DNDEBUG -D_CONSOLE -D_MBCS -Xcompiler "/EHsc /W3 /nologo /O2 /FS /Zi  /MD " -o x64\Release\kernel.cu.obj %1

endlocal

What's the magic to create a lib?

No magic should be required, though it probably helps to know how static libraries are built on Windows for regular CPU code.

I would suggest starting the other way around: First look at the forum thread I linked above and make sure you can replicate the example I gave using the step-by-step process given there. Once that works, start replacing the files used in the example with your own files.

The lib can be created simply:

LIB.EXE /OUT:MYLIB.LIB FILE1.OBJ FILE2.OBJ

Or a dll:

LINK.EXE /DLL /OUT:MYLIB.DLL FILE3.OBJ FILE4.OBJ
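For what it's worth, nvcc can also compile and archive in a single step with its --lib mode, skipping the separate LIB.EXE invocation. This sketch reuses the CUDA 9.2 and VS 2015 paths from the batch file above; kernel.cu and the output name are placeholders for your own files:

```bat
REM Sketch: compile kernel.cu and archive it into a .lib in one nvcc call.
"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\bin\nvcc.exe" ^
    -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\x86_amd64" ^
    --machine 64 --lib -o x64\Release\mykernels.lib kernel.cu
```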

However, online compilation and deployment would be a huge benefit.

I can't provide people with an app that adapts its core functionality to their wishes if it first requires them to install the SDK, Visual Studio…

I was hoping to sell stuff to people who aren’t engineers.
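On the online-compilation point: CUDA ships a runtime compilation library, NVRTC, which can compile kernel source strings to PTX inside a deployed application; the PTX is then loaded through the driver API, so end users need a display driver but not Visual Studio or the toolkit (the NVRTC DLL can be redistributed with the app, subject to the CUDA EULA). A minimal sketch, with error checking elided and the kernel source as a placeholder:

```cpp
#include <nvrtc.h>
#include <cuda.h>
#include <string>

// Hypothetical kernel source, shipped (or generated) as a string in the app.
static const char *kSource = R"(
extern "C" __global__ void scale(float *x, float s) {
    x[threadIdx.x] *= s;
})";

int main() {
    // Compile the CUDA C++ string to PTX at run time.
    nvrtcProgram prog;
    nvrtcCreateProgram(&prog, kSource, "scale.cu", 0, nullptr, nullptr);
    const char *opts[] = { "--gpu-architecture=compute_30" };
    nvrtcCompileProgram(prog, 1, opts);
    // Real code should check the result and fetch nvrtcGetProgramLog on failure.
    size_t ptxSize;
    nvrtcGetPTXSize(prog, &ptxSize);
    std::string ptx(ptxSize, '\0');
    nvrtcGetPTX(prog, &ptx[0]);
    nvrtcDestroyProgram(&prog);

    // Load the PTX with the driver API and look up the kernel.
    cuInit(0);
    CUdevice dev;  cuDeviceGet(&dev, 0);
    CUcontext ctx; cuCtxCreate(&ctx, 0, dev);
    CUmodule mod;  cuModuleLoadDataEx(&mod, ptx.c_str(), 0, nullptr, nullptr);
    CUfunction fn; cuModuleGetFunction(&fn, mod, "scale");
    // ... allocate buffers with cuMemAlloc and launch with cuLaunchKernel ...
    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}
```

This is how an app can adapt its GPU code on the end user's machine without any build tools installed there.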