A strange problem with function parameters - please help

I use CUDA 3.0 and a GTX 480 for general-purpose computing, but I have run into a very strange problem.
I define some functions as extern “C” functions in a .cu file, and I want to use them from a .cpp file.
When I run this program in Debug mode, the parameters show wrong random values; that is, when I step into the function, the values are not the ones I passed in.
I don’t get it. Please help me. :unsure:
I planned to upload an image to explain, but something seems to be wrong with uploading. Sorry.
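
For reference, here is a minimal sketch of the kind of setup I mean (the kernel and all the names here are placeholders, not my real code):

[code]
// wrapper.cu -- compiled by nvcc
__global__ void scaleKernel(float *d_data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        d_data[i] *= factor;
}

// extern "C" so the name is not mangled and the .cpp file can link to it
extern "C" void scaleOnDevice(float *d_data, int n, float factor)
{
    dim3 block(256);
    dim3 grid((n + block.x - 1) / block.x);
    scaleKernel<<<grid, block>>>(d_data, n, factor);
    cudaThreadSynchronize();  // CUDA 3.0-era synchronization call
}

// main.cpp -- compiled by cl.exe (VS2008)
extern "C" void scaleOnDevice(float *d_data, int n, float factor);
// ... cudaMalloc d_data, cudaMemcpy the input in, then call:
// scaleOnDevice(d_data, 3072, 2.0f);
[/code]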

Are you compiling with [font=“Courier New”]-O0 -g[/font]?

Also try printing the parameters from the program code immediately before and after entering the function - there’s a high chance that your program actually is ok and the debugger is just outputting garbage. Seems like gdb hasn’t kept pace with gcc’s increasing optimization capabilities.
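
Something along these lines (just a sketch; the function and values are made up):

[code]
#include <cstdio>

void myFunc(int a, int b)
{
    // first statement inside the function
    printf("inside myFunc: a=%d b=%d\n", a, b);
}

int main()
{
    int a = 7, b = 42;
    printf("before call:   a=%d b=%d\n", a, b);
    myFunc(a, b);  // if both printouts agree, the debugger display is wrong, not the code
    return 0;
}
[/code]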

I didn’t use the gcc compiler; I used the VS2008 IDE.

Besides, I followed the values of the parameters before entering the function, and all the values were OK, but then they became wrong random values. I also defined some other functions in this way, and those functions are all OK with their parameter values. I’m really confused. :unsure:

Sorry, I meant: all values become wrong random values after entering the function.

By the way, I uploaded an image to explain my problem on the Chinese NVIDIA forum:
[url=“http://cuda.itpub.net/viewthread.php?tid=1326665&pid=16132219&page=1&extra=#pid16132219”]http://cuda.itpub.net/viewthread.php?tid=1326665&pid=16132219&page=1&extra=#pid16132219[/url]

The parameters “iTrnMatrixNum” and “iVctrDim” that I passed are 1 and 3072 respectively, but as you can see, after entering the function the parameter values are totally wrong.

Ok, so it’s not gdb and gcc. But my original comment still applies - check (by changing the program source code) that the problem actually is in your code and not just with the debugger.

Also, switching all optimizations off seems to be quite essential with debuggers nowadays.
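
With the VS2008 CUDA build rule, that roughly means the Debug configuration’s nvcc line should look something like this (the exact options depend on your build rule; /Od and /Zi are the MSVC equivalents of -O0 -g for the host code):

[code]
nvcc -c -g -Xcompiler "/Od,/Zi" kernel.cu -o kernel.obj
[/code]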

If there were problems with my code, though, why does everything work fine when I run this program in EmuDebug mode but not in Debug mode? :">

CUDA’s “Device Emulation” is a very poor emulation. Rather than emulate the device (which is a very complex problem), it instead compiles your device code using the host compiler and a wrapper to spawn host threads to run your kernel. This kind of emulation will not detect many classes of problems in your kernel, including things like:

  • Passing host pointers to the device.
  • Certain kinds of race conditions.

As a result, there are plenty of programs that work in emulation mode, but fail on a real GPU. (This is NVIDIA’s motivation for removing the Emu modes entirely in the latest CUDA.)
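
To make the first bullet concrete, here is a small sketch (hypothetical code, not from the original poster’s program). In EmuDebug the “kernel” runs as ordinary host threads, so a plain host pointer dereferences fine; on a real GPU the same pointer is meaningless and you get garbage or a crash:

[code]
#include <cstdio>
#include <cuda_runtime.h>

__global__ void sumKernel(const int *data, int n, int *result)
{
    // single thread for simplicity
    int s = 0;
    for (int i = 0; i < n; ++i)
        s += data[i];
    *result = s;
}

int main()
{
    int h_data[4] = {1, 2, 3, 4};
    int h_result = 0;

    // WRONG (don't do this): host pointers handed straight to the kernel.
    // Appears to work in EmuDebug, fails on a real GPU:
    // sumKernel<<<1, 1>>>(h_data, 4, &h_result);

    // RIGHT: go through device memory.
    int *d_data = 0, *d_result = 0;
    cudaMalloc((void**)&d_data, sizeof(h_data));
    cudaMalloc((void**)&d_result, sizeof(int));
    cudaMemcpy(d_data, h_data, sizeof(h_data), cudaMemcpyHostToDevice);
    sumKernel<<<1, 1>>>(d_data, 4, d_result);
    cudaMemcpy(&h_result, d_result, sizeof(int), cudaMemcpyDeviceToHost);
    printf("sum = %d\n", h_result);  // prints 10
    cudaFree(d_data);
    cudaFree(d_result);
    return 0;
}
[/code]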

But either way, the parameter problem I described above shouldn’t happen; that’s what I can’t understand.

Have you tried this (printing the values from the code, as suggested above)?