Would it be possible to update the provided example to show how to build a shared library that links to the shared version of the CUDA runtime (i.e., libcudart.so)? I tried changing the example to use a shared particles library but received the same error as Alexander (below).
I'm afraid I'm still getting the same errors. Building shared CUDA libraries seems to be pretty broken, or at the very least hard. The author, Robert Maynard, is active in CMake development, so if there is a test case that should clearly work (such as this), you might consider creating an issue on the CMake bug tracker: https://gitlab.kitware.com/...
Thanks for reporting the issue. I have submitted a fix to CMake ( https://gitlab.kitware.com/... ).
Currently, the way to specify a specific version of CUDA is the same as for other languages like C++. This means you can either have the desired version on your PATH or set the CUDACXX environment variable ( https://cmake.org/cmake/hel... ).
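For example, a minimal sketch on Linux, assuming CUDA 9.0 is installed under /usr/local/cuda-9.0 (the install path is illustrative):

# Point CMake at a specific nvcc before the first configure
export CUDACXX=/usr/local/cuda-9.0/bin/nvcc
cmake <path_to_source>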
The exception to the above is MSVC, which is controlled by the toolset settings. Specifically, you would want something like:
cmake -G "Visual Studio 14" -T host=x64,cuda=9.0
For more on MSVC toolsets: https://cmake.org/cmake/hel...
So I am going to answer your question in two parts.
1. You can specify globally that you want to use the shared CUDA runtime on the initial configuration of a project by using the following:
cmake <path_to_source> -DCMAKE_CUDA_FLAGS:STRING="--cudart shared"
2. The CUDA device linker doesn't support resolving symbols that reside in dynamic/shared libraries; you can find this documented at ( http://docs.nvidia.com/cuda... ). For this reason, CMake automatically drops any dynamic libraries from the device link line.
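If you need device symbols to be callable from other targets, a minimal sketch is to keep the device code in a static library (the file and target names here are illustrative):

# Keep device code in a static library so the device linker can resolve its symbols
add_library(particles STATIC particle.cu v3.cu)
set_target_properties(particles PROPERTIES CUDA_SEPARABLE_COMPILATION ON)
# The executable can then link against, and call, the library's device symbols
add_executable(particle_test test.cu)
target_link_libraries(particle_test PRIVATE particles)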
Going to post my answer about this again so that everyone sees it :)
So when you convert the particles library to be shared, none of its device symbols can be called by any other library or executable. This is why you see the test executable fail to link: those device symbols are not visible to it.
The reason for this is that the CUDA device linker doesn't support resolving symbols that reside in dynamic/shared libraries; you can find this documented at ( http://docs.nvidia.com/cuda... ). For this reason, CMake automatically drops any dynamic libraries from the device link line.
Interestingly, I specifically recall being able to somehow dynamically link our project when we still used FindCUDA.cmake, but that might have been without separable compilation. I will go back in time and see if I can make the link work there.
Either way, thanks for this valuable bit of information, it's very appreciated! I've been trying to make dynamic device linking work for a few days now. Is there any preferred place where I can contact you for further questions? Maybe StackOverflow with the right tags? The CMake mailing list? I would rather not spam the CMake bug tracker with support requests.
The CMake mailing list would be the best place to start. Once you have tracked down issues with CUDA support in CMake, please do open up issues on the GitLab issue tracker.
OK. Thanks for the information.
Thanks for the information, Robert. Looks like I'll stick with the static runtime library for now.
How do I set cmake -DCMAKE_CUDA_FLAGS="-arch=sm_30" directly from CMake?
set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} -arch=sm_30") will append -arch=sm_30 to the CMake CUDA flags.
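For context, a minimal sketch of where that line would sit in a CMakeLists.txt (the project name is illustrative):

cmake_minimum_required(VERSION 3.8)
project(example LANGUAGES CXX CUDA)
# Append -arch=sm_30 to any CUDA flags already provided on the command line
set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} -arch=sm_30")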
Thank you
Hello Robert. I have a problem building with VS 2017 and CMake 3.10.1 for CUDA 9.1. I am getting a "Compiling the CUDA compiler identification source file CMakeCUDACompilerId.cu failed" error from the CMake GUI. The same setup works with VS 2015, CMake 3.10.1, and CUDA 9.1. I was not able to get a concrete answer to this problem anywhere. Is there anything you are aware of that could help me work around this issue?
How is the host compiler set? I tried changing it with CUDA_HOST_COMPILER, but this doesn't work (for compatibility reasons, I'm still using an old SDK that doesn't support modern C++ compilers).
The variable name should be CMAKE_CUDA_HOST_COMPILER, or you can use the environment variable CUDAHOSTCXX. If the build directory for the project has already been successfully configured, you will need to delete it and re-configure with this variable specified on the initial configure.
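For example (a minimal sketch; the compiler path is illustrative, matching an older SDK):

cmake <path_to_source> -DCMAKE_CUDA_HOST_COMPILER=/usr/bin/g++-4.8

or, equivalently, setting the environment variable before the first configure:

export CUDAHOSTCXX=/usr/bin/g++-4.8
cmake <path_to_source>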
Makes sense, thanks!
I hope the handling of custom compiler flags got better... Some CXX flags may not work with the CUDA host compiler.
CXX flags are not propagated down at all with the language bindings. Projects need to do that themselves and make sure to wrap the flags in -Xcompiler if needed. This isn't that bad, as the most common flags (language level / fPIC) can be set using CMake concepts instead of flags directly. Additionally, CUDA can be used as a language in generator expressions, allowing for better control over compile flags.
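For example, a minimal sketch of per-language compile options using the COMPILE_LANGUAGE generator expression (the target name and flags are illustrative):

# Pass -Wall to the host compiler via -Xcompiler for CUDA sources,
# and directly for C++ sources
target_compile_options(particles PRIVATE
  $<$<COMPILE_LANGUAGE:CUDA>:-Xcompiler=-Wall>
  $<$<COMPILE_LANGUAGE:CXX>:-Wall>)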
Well, I had some issues with clang as the C++ compiler and GCC as the host compiler: clang would be called for the dependency computation instead of gcc, with -dumpspecs, where it actually failed, and all the clang flags were passed to nvcc, -std=c++17 as well. (It still does with brand-new checkouts; for some strange reason it works the second time, once the dependency file has been generated.)
Still need to check if native CUDA behaves better.
Native CUDA should behave better, especially with CMake 3.10+, which has fixed a couple of bugs when using different host compilers and C++ compilers.