On the host side I allocate a large buffer (approx. 86 MB).
Then I let a kernel work on that memory, and I'm fairly sure I'm not writing beyond the 86 MB boundary.
Depending on the kernel, I get a strange screen flicker that worries me.
Also, heaps of pixel values on screen are changed (some white, some black) after running the kernel.
I check every call for a cudaSuccess return value, so I don't think there is an obvious error I'm ignoring.
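For reference, the checking looks roughly like this (CUDA_CHECK is a wrapper macro of my own, and the kernel is just a stand-in for what I actually run):

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Abort with file/line and the CUDA error string if a call fails.
#define CUDA_CHECK(call)                                            \
    do {                                                            \
        cudaError_t err = (call);                                   \
        if (err != cudaSuccess) {                                   \
            fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__,      \
                    cudaGetErrorString(err));                       \
            exit(EXIT_FAILURE);                                     \
        }                                                           \
    } while (0)

__global__ void touch(unsigned char *buf, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) buf[i] = 0;   // guarded: never writes past n
}

int main() {
    const size_t n = 86u * 1024u * 1024u;   // ~86 MB, as described
    unsigned char *d_buf = nullptr;
    CUDA_CHECK(cudaMalloc(&d_buf, n));

    touch<<<(unsigned)((n + 255) / 256), 256>>>(d_buf, n);
    // Kernel launches fail asynchronously: check both the launch
    // itself and the subsequent synchronization.
    CUDA_CHECK(cudaGetLastError());
    CUDA_CHECK(cudaDeviceSynchronize());

    CUDA_CHECK(cudaFree(d_buf));
    return 0;
}
```

One thing I only realized while writing this down: a kernel launch itself doesn't return an error code, so `cudaGetLastError()` plus a synchronize is needed to catch errors from the kernel, not just from the memory calls around it.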
I wonder if anybody has had similar experiences and can give me a hint as to what the problem could be?
Or failing that, what is the best way to track down the suspected failure?
There is supposed to be memory protection on the card to prevent this, but driver bugs in the past have sometimes led to weird screen corruption when kernels are aborted. Failing hardware has also been known to cause screen corruption in some cases.
If you are worried about buffer overruns, you can run your CUDA program with cuda-memcheck. (I finally used this program last week! Great stuff.)
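As a sketch of what it catches, here is a deliberate out-of-bounds write of the kind cuda-memcheck flags (the binary name "app" is just a placeholder):

```cuda
#include <cuda_runtime.h>

// Deliberately buggy kernel: the bounds guard is missing, so the
// extra threads in the last block write past the end of the buffer.
__global__ void overrun(unsigned char *buf, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    buf[i] = 0;   // bug: no "if (i < n)" check
}

int main() {
    const size_t n = 1000;   // not a multiple of the block size
    unsigned char *d_buf = nullptr;
    cudaMalloc(&d_buf, n);
    // 4 blocks * 256 threads = 1024 threads for a 1000-byte buffer
    overrun<<<(unsigned)((n + 255) / 256), 256>>>(d_buf, n);
    cudaDeviceSynchronize();
    cudaFree(d_buf);
    return 0;
}

// Build and run it under the checker (cuda-memcheck ships with the
// CUDA toolkit):
//   nvcc -o app overrun.cu
//   cuda-memcheck ./app
```

cuda-memcheck instruments device memory accesses, so the overrun is reported even though the program itself exits without any visible error.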