Assertion failure in cuda-gdb printing a variable gives 'cuda-gdb internal error'

Hi,

I am debugging a program and getting an internal error from cuda-gdb.
The system is a dual Tesla C1060 machine running Fedora 10 and CUDA 2.3.
I have reduced the program to the smallest version that still triggers the error.

A debug session looks like this:

(cuda-gdb) b 37
Breakpoint 1 at 0x417d2e: file GDBug.cu, line 37.
(cuda-gdb) r
Starting program: /home/cuda/Mappa/GDBug
[Thread debugging using libthread_db enabled]
[New process 6013]
[New Thread 139998076090112 (LWP 6013)]
Warning: a GPU was made unavailable to the application due to debugging
constraints. This may change the application behaviour!
[Switching to Thread 139998076090112 (LWP 6013)]
[Current CUDA Thread <<<(0,0),(0,0,0)>>>]

Breakpoint 1, itera () at GDBug.cu:38
38 ranmar(m, SEED, x, NM);
Current language: auto; currently c++
(cuda-gdb) p m
Assertion failure at /home/buildmeister/build/sw/rel/gpu_drv/r190/r190_00/drivers/gpgpu/cuda/src/debugger/cudbgtarget.c, line 2278: cuda-gdb internal error
Aborted

I am probably doing something wrong, but cuda-gdb shouldn't crash like this.

Thanks,

G

P.S.
The upload keeps failing, so I cannot attach the file; I'll paste it here instead:

#include <stdlib.h>
#include <cuda.h>
#include "cutil_inline.h"

#define NM 200
#define NT 512
#define NB 100
#define SEED 12345

__global__ void itera();

int main(int argc, char **argv)
{
    cudaSetDevice(0);

    // Kernel invocation
    itera<<<NB, NT>>>();
    cutilSafeCall(cudaThreadSynchronize());
}

__device__ void ranmar(int ij, int kl, double *rvec, int len);
//__device__ double ta;

__global__ void itera()
{
    double x[NM];
    int nt = blockDim.x;  // How many threads in block
    int i  = threadIdx.x; // My thread
    int b  = blockIdx.x;  // My block
    int m  = 0;

    // My global index
    m = b * nt + i;

    ranmar(m, SEED, x, NM);
}

__device__ void ranmar(int ij, int kl, double *rvec, int len)
{
    int ivec;
    double uni;
    float u[98], cm;
    int i97, j97;

    cm = 16777213.0 / 16777216.0;

    i97 = 97;
    j97 = 33;

    for (ivec = 0; ivec < len; ivec++)
    {
        uni = u[i97] - u[j97];
        rvec[ivec] = uni - cm;
    }
}

Hi,

Thanks for submitting the bug. The next version of the CUDA tools, whose beta will be available next month, fixes this issue.

Alban.