Screen corruption after CUDA program execution

Hi,

I’m new to CUDA programming.
I have some simple example codes (and SDK samples) that I compiled and ran successfully. But sometimes, after several executions, the screen blinks and random pixel artifacts appear. These pixels change while I’m moving windows, i.e. as the screen refreshes. I’ve attached a screenshot; there it looks even worse: on the right monitor the wallpaper appears torn. Could this be a problem with pixels being copied from a buffer?

I need to restart my laptop to get it working again.

My configuration:
Dell Precision M6300
CPU: Intel(R) Core™2 Duo CPU T9300 @ 2.50GHz
GPU: NVIDIA Quadro FX 1600M

OS: Ubuntu 10.04 (Linux 2.6.32-24-generic)

NVIDIA driver version: 270.41.19
CUDA toolkit: 4.0.17
gcc: (Ubuntu 4.4.3-4ubuntu5) 4.4.3

Has anyone had a similar issue? What could be wrong? Or, more simply, how can I restart the driver without rebooting the system?

Best Regards

PS. I’m not sure if this is the correct forum section. If not, please move the thread.

Hi,

I’ve sometimes had a similar problem, where my mouse pointer looked different (coarse pixels) or my screen went totally white.

The cause was a bug in my GPU/kernel code, specifically a buffer overrun.
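
For illustration, here is a minimal sketch of that kind of bug (the kernel and file names are made up for the example). The off-by-one in the bounds check lets one thread write past the end of the allocation:

// overrun.cu: hypothetical example of an out-of-bounds device write
#include <cuda_runtime.h>

__global__ void scale(float *buf, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx <= n)          // BUG: should be idx < n; the thread with idx == n writes past the buffer
        buf[idx] *= 2.0f;
}

int main()
{
    const int n = 1000;    // not a multiple of the block size, so a thread with idx == n is launched
    float *d_buf;
    cudaMalloc(&d_buf, n * sizeof(float));
    scale<<<(n + 255) / 256, 256>>>(d_buf, n);   // 1024 threads for 1000 elements
    cudaDeviceSynchronize();
    cudaFree(d_buf);
    return 0;
}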

I hope this helps a little bit :)

bye

This used to happen with every buffer overrun back in CUDA 0.8. It has been improving over time; there is supposed to be memory protection on the GPU (i.e., one context cannot write into another context’s space, or into the driver’s memory), but it seems there are still problems with that.

Run your program under cuda-memcheck (cuda-memcheck executable arguments). It can generally identify which kernel and thread are writing past the end of the buffer. If you compile with the -G and -g command-line options, you can run under cuda-gdb and it will break on the exact line of code that performs the offending write.
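
Assuming the buggy example above is saved as overrun.cu, the workflow would look roughly like this (the binary name is just a placeholder):

nvcc -g -G -o overrun overrun.cu   # -g adds host debug info, -G adds device debug info
cuda-memcheck ./overrun            # reports the invalid __global__ write, with kernel and thread
cuda-gdb ./overrun                 # then: set cuda memcheck on, run; it stops at the faulting line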