I recently installed CUDA 2.3 along with cuda-gdb. My projects combine CUDA code with regular C code, and are built using CMake (and FindCUDA.cmake). Does anyone know which CMake build options are needed to compile the debugging information required by cuda-gdb? Setting CUDA_NVCC_FLAGS to “-g;-G” generates an error message:
nvcc fatal : Option ‘-G’ requires nvcc compilation of .cu files to object files
I have CMake version 2.6.2. I’m not sure which version of FindCUDA.cmake is on my system. Any assistance is much appreciated.
You don’t have to add -g -G directly to the CMake file; that is what CUDA_NVC­C_FLAGS is for.
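For example, the cache entry would look something like this (a sketch; you can set it with ccmake or cmake-gui instead of hard-coding it):

CUDA_NVCC_FLAGS:STRING=-g;-G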
To solve the “nvcc fatal : Option ‘-G’ requires nvcc compilation of .cu files to object files” error, upgrade to the newest FindCUDA.cmake, which is now part of CMake 2.8.
However, last I checked you will still get an error at the linking stage, because CMake links with gcc, and to correctly link device-debuggable code you need to invoke nvcc with the -g -G options again. I did that step by hand last time I tried out device debugging. If the FindCUDA.cmake developer doesn’t notice this post, I’ll shoot him an email tomorrow. Having FindCUDA.cmake handle all the steps for building device debugging into apps is a must-have going forward, with tools like NEXUS coming out!
I agree, this is a dirty hack to make it debug on cuda-gdb. Hope FindCUDA can be fixed to handle all these issues.
Also, I was wondering whether anyone is interested in implementing an Eclipse plugin specifically designed for CUDA development (just like NEXUS)? The features I want most are correct CUDA/PTX syntax highlighting, a memory inspector, and code completion… (okay, too greedy…)
According to the cuda-gdb manual, the -g -G flags are not compatible with the -cubin flag. I think FindCUDA.cmake generates .cubin files to determine register, shared memory, and local memory usage. Perhaps if you turn off the option to do that, it will work. This is just speculation. In the CMakeCache.txt file, try setting the cubin-generation option to OFF.
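Presumably a cache entry along these lines (I haven’t verified the exact name):

CUDA_BUILD_CUBIN:BOOL=OFF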
I’m still getting the
nvcc fatal : Option ‘-G’ requires nvcc compilation of .cu files to object files
error even after updating to CMake 2.8 RC 7 and its integrated FindCUDA.cmake when defining CUDA_NVCC_FLAGS with the ‘-g -G’ options. I manually set the cubin generation options just in case, but AFAIK they are off by default.
I’m in the same situation. I’ve installed CUDA 2.3 and CMake 2.8, and when I try to build my code with the ‘-g -G’ options I get the above error.
Sample of cmake output:
But when I add the -deviceemu option to these, everything goes well.
I too have encountered difficulties with CMake and CUDA via the FindCUDA module, but never with the SDK.
I would also like to request that the FindCUDA.cmake developers use the same compilation flags as the current SDK. Given the rapid releases of the updated runtime system, it is pretty important for FindCUDA.cmake to reflect the current SDK. Clearer documentation about how to use the compile-time variables provided in the FindCUDA.cmake file would also be most helpful.
I checked the command CMake issues when I use the -g and -G options. CMake also adds the -M option; when I deleted that option (-M) from the compile command, compilation went well.
To solve this problem I need to remove the -M option from the CMake-generated command, but I don’t know how to do it :/
Hi, I’m not sure why my filter didn’t pick up this thread until today. I’m the FindCUDA.cmake developer. Now to address some of the concerns in this thread.
The -M option is there to compute the source level dependencies (i.e. header file includes) just like ‘gcc -M’ does. This is an integral part of what FindCUDA does. If -M doesn’t work with -g -G, then I’ll have to file a bug with the CUDA folks. The SDK doesn’t make use of -M as far as I know, but that doesn’t mean it’s a feature no one should use. It would be helpful to post a reproducer, but I’ll try next week to reproduce the issues you are seeing with -g and -G.
I’ll have to look into the issue of using nvcc for linking. There hasn’t been a need for that yet, but now that there is I’ll look into it.
Generation of the CUBIN file for stats collection should be disabled by default, but only with a clean build tree. If it is on, you will need to change the CUDA_BUILD_CUBIN option to OFF (I see one of you has done this).
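For example (one way to do it; you can also flip the entry directly in CMakeCache.txt):

# Disable the extra cubin compile pass used only for resource statistics.
set(CUDA_BUILD_CUBIN OFF CACHE BOOL "" FORCE)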
‘make VERBOSE=1’ will generate all the commands and tell you what FindCUDA is doing if you want to see the full output of the build system.
As far as a CUDA C Eclipse plugin, I don’t know of any plans one way or the other.
@dpephd is the documentation for FindCUDA.cmake in CMake 2.8 not descriptive enough? What pieces or variables need augmentation, and in what way?
So, based on your (JBigler) post, I think that computing source-level dependencies doesn’t need the -g or -G options. I tested compilation to the *.o file, and everything went well with -g and -G. But the whole process is interrupted at the previous step (the one with -M). If there were a way to set some variable that adds custom options only to the compile-to-*.o step, I would be able to work around my problem.
I looked into the FindCUDA.cmake file, but it is so complicated to modify… especially for me, since I’m new to CMake.
I’m adding some output to make it easier for you to reproduce this situation:
I’m investigating the issue with -M, but for now here is a quick and dirty solution to your problem.
In /share/cmake-2.8/Modules/FindCUDA/run_nvcc.cmake you need to add an additional variable to add your special flags. Look for the cuda_execute_process() around line 208.
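The modification looks roughly like this (a sketch; the exact argument list of that cuda_execute_process() call varies between FindCUDA versions):

cuda_execute_process(
  "Generating ${generated_file}"
  COMMAND "${CUDA_NVCC_EXECUTABLE}"
  "${source_file}"
  ${format_flag} -o "${generated_file}"
  ${nvcc_flags}
  @CUDA_COMPILE_TIME_EXTRA_FLAGS@ # the added pass-through
  ${CUDA_NVCC_FLAGS}
  ${CUDA_NVCC_INCLUDE_ARGS}
  )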
The @CUDA_COMPILE_TIME_EXTRA_FLAGS@ will be expanded to whatever value it has when you call cuda_add_library or cuda_add_executable. Don’t forget to unset it if you don’t want it anymore.
include_directories(../src)
set(SRC ../src)
set(CPP_SOURCES
  ... # some cpp files ;)
)
set(CUDA_SOURCES
  ${SRC}/CudaTeller.cu
)
if (CUDA_FOUND)
  set(CUDA_COMPILE_TIME_EXTRA_FLAGS -g -G)
  cuda_add_executable(balls
    ${CPP_SOURCES}
    ${CUDA_SOURCES}
  )
  set(CUDA_COMPILE_TIME_EXTRA_FLAGS)
endif()
I did a little investigation into the -G -M flag issue. It looks like CUDA 2.3 chokes on -G -M for Visual Studio and Linux GCC, while the current beta of CUDA 3.0 handles it just fine.
I’m going to add some code to work around this issue, though it still complains in VS about the assembler not being present (oh, well).
Thank you for your reply. I am trying to work with a code base that is not running properly on my system, and I became frustrated after not being able to compile and run it under the cuda-gdb debugger, as noted above. After messing around with CMake 2.6 and the FindCUDA downloaded from the development website, and getting frustrated with it, I downloaded and installed CMake 2.8 on my system (CentOS 5.4, 64-bit) along with the CUDA 3.0 Beta, and can confirm, as jjtapiav noted, that compiling with the -G flag now works, i.e. completes successfully.
I am still having problems with CMake and the CUDA runtime system in general, since I am playing around with various optimization levels to try to figure out what is going wrong. Setting the various CUDA_NVCC_FLAGS_{RELEASE,DEBUG,MINSIZEREL,RELWITHDEBINFO} strings communicates these flags to nvcc within the CMake build framework, but not to the various parts of the nvcc compilation trajectory. In particular, optimization flags (-Ox) are not communicated to the nvopencc and ptxas trajectory components when compiling in Release mode.
With regard to documentation: while the FindCUDA.cmake file does place some explanatory comments in the generated CMakeCache.txt file, documentation of how flags are and are not communicated throughout the CUDA compilation trajectory would be helpful. Additional CMake strings might also help in communicating flags to the various parts of the trajectory. It would also be helpful if any CMake CUDA compilation documentation cross-referenced the “The CUDA Compiler Driver NVCC” document supplied with the SDK. The usefulness of that document could itself be improved by a more detailed description of how the various optimization levels affect assembly-code generation.
Also, a developer may wish to use a release target for .cpp files and a debug target for CUDA device code. This does not appear to be possible currently with the FindCUDA.cmake implementation.
The CUDA compilation trajectory is a complex one. More documentation about it and how to use it effectively would assist developers in my opinion.
Thanks,
dpe
PS My development system is listed on my profile page.
I can’t comment on why the flags aren’t getting propagated to the various compilation stages. If the flag is getting passed to nvcc, then that’s as far as the FindCUDA.cmake script can take you; I can’t modify how nvcc behaves.
You might try adding the ‘-v’ argument to see all the individual phases of compilation. If your nvcc flags aren’t getting propagated to the right component, look at the nvcc documentation to see what flags you might specify to get the desired behavior.
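For example, nvcc has -Xopencc and -Xptxas options that forward flags to those stages, so something like this should work (a sketch; double-check against the nvcc manual):

# Forward optimization flags to the nvopencc and ptxas stages of the trajectory.
list(APPEND CUDA_NVCC_FLAGS -Xopencc -O2 -Xptxas -O2)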
Providing a link or reference to the CUDA nvcc docs is a good idea. I’ll go ahead and add this. I’ll try and pass this feedback related to the CUDA Compiler Driver document on to the CUDA team. Also, the documentation for FindCUDA is in the FindCUDA.cmake file not in the CMakeCache.txt file. Those doc strings in there are merely reminders of what the variable should be. See cmake --help-module FindCUDA for documentation on FindCUDA. Keep in mind that FindCUDA isn’t going to document nvcc much. NVCC has its own documentation. This script is just designed to help you call nvcc as part of your build system.
As far as specifying specific flags to various stages in the compilation, I’ve toyed with the idea of adding special CMake flags for this, but I haven’t found a compelling use case. You can simply add the flags to the CUDA_NVCC_FLAGS if you want that behavior.
Try setting CUDA_PROPAGATE_HOST_FLAGS to OFF. This will not propagate any of the host compiler flags from CMAKE_CXX_FLAGS to nvcc via -Xcompiler. It seems to me you could then set the host compiler flags to whatever you want with the CUDA_NVCC_FLAGS or with the OPTIONS flags to CUDA_ADD_LIBRARY (see the FindCUDA docs).
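Something along these lines (a sketch; ‘mylib’ and CUDA_SOURCES are placeholders):

# Don't forward CMAKE_CXX_FLAGS to nvcc via -Xcompiler...
set(CUDA_PROPAGATE_HOST_FLAGS OFF)
# ...and hand-pick the host-compiler flags for the device build instead.
cuda_add_library(mylib ${CUDA_SOURCES} OPTIONS -Xcompiler -fPIC)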
Indeed it is. It might be useful to start a new thread with your concerns about the CUDA compilation trajectory documentation, so that your comments don’t get lost in this thread.
Thanks for the reply and suggestions. I was ultimately able to resolve my runtime failure problem by explicitly passing optimization flags to the various CUDA compilation trajectory components … see my post here.
Per your suggestion, I have posted a follow up query [topic=“154876”]“What do optimization levels do?”[/topic] to try and get a better understanding of my problem and whether pursuing additional work to get higher optimizations levels (-O2 and greater) to work is worth it.