Problem with kernel - "Variable is not live at this point"

I’m having a problem with my kernel not working properly.

Using cuda-gdb, I’ve narrowed the problem to these lines of code.

double G_5, G_6, G_7, G_8, G_9, G_10, G_11, G_12;

G_5 = ( bl_a[0]/bl_a[3] - lep_a[0]/lep_a[3] );
G_6 = ( bl_a[1]/bl_a[3] - lep_a[1]/lep_a[3] );
G_7 = ( bl_a[2]/bl_a[3] - lep_a[2]/lep_a[3] );
G_8 = ( G_1/lep_a[3] - G_2/bl_a[3] )/2.;

The problem is that I set the variable G_5, and when I print its value on the next line with cuda-gdb, the output is “warning: Variable is not live at this point. Returning garbage value.”. I know that the values in the arrays bl_a and lep_a are set properly. Also, G_5 is used some lines later… It seems to be an nvcc optimization; maybe the compiler concludes that G_5 is not being used, which is not true… Any suggestions?

P.S.: This problem happens with some other variables too, such as G_7. I’m using CUDA 4.0 with a card of compute capability 2.0 (Tesla C2050).

It could just be that printing from inside a kernel with cuda-gdb is not supported.

Cuda-gdb should be able to access the value of any automatic variable between its definition and its last use while stopped on any source line (if single-stepping at the assembly level using nexti, it is possible to lose coverage for a few instructions). Can you confirm that this is the case? The code snippet does not include any use of G_5 (although your comment mentions it is actually used some lines later).

I recommend using a later version of the CUDA toolkit, as many bugs related to live ranges have been fixed since the 4.0 version. If that is not possible, massaging the source code to reduce the number of automatic variables live at any point in time will reduce the chance of “losing” live ranges. Adding dummy uses (copying to a volatile variable, for instance) will also increase the chances of G_5 becoming visible.