Strange access to memory

Hi, I was recently introduced to CUDA and I have been dealing with the same problem for weeks.
I am using CUDA to accelerate intermediate calculations in my code. I tested this code some months ago on a computer with an AMD graphics card and an off-board NVIDIA graphics card, and everything went fine. Now I am testing it on a computer with an integrated NVIDIA card (Quadro K4000), and some results are returned as Not a Number (-1.#QNAN0E+000). Is there any difference? Or is there some configuration in Visual Studio that should be different? I really don’t understand these results: sometimes the code returns valid numbers, then I rerun the same executable file and the results are different…

Thanks in advance for any help

First thing to verify is that you are checking the return codes from CUDA functions. It is possible that the kernel is failing to launch (due to temporary lack of sufficient GPU memory, or other reasons), producing unexpected results.

I am checking the code using

if(cudaMemcpy or cudaMalloc != cudaSuccess)
printf("\n Error");

But no error message is displayed during compilation or execution… Are there other types of tests?

I doubt your error checking could look exactly like that. And you don’t give any indication that you are properly checking kernel calls for errors. If you want to do proper CUDA error checking, study a CUDA sample code like the vectorAdd sample code:

and apply a similar method to all CUDA runtime API calls and all kernel calls in your code.
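Roughly, the pattern in the samples looks like this: wrap every runtime API call in a check, and after each kernel launch call cudaGetLastError() (launch failures) plus cudaDeviceSynchronize() (execution failures). A minimal sketch; the checkCuda macro and myKernel are placeholder names of mine, not taken from the sample itself:

```cpp
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Check a CUDA runtime call and abort with a readable message on failure.
#define checkCuda(call)                                              \
    do {                                                             \
        cudaError_t err = (call);                                    \
        if (err != cudaSuccess) {                                    \
            fprintf(stderr, "CUDA error at %s:%d: %s\n",             \
                    __FILE__, __LINE__, cudaGetErrorString(err));    \
            exit(EXIT_FAILURE);                                      \
        }                                                            \
    } while (0)

__global__ void myKernel(float *d_data) { /* ... */ }

int main() {
    float *d_data = NULL;
    checkCuda(cudaMalloc(&d_data, 256 * sizeof(float)));

    myKernel<<<1, 256>>>(d_data);
    // Kernel launches return no status directly; check them like this:
    checkCuda(cudaGetLastError());       // catches launch errors
    checkCuda(cudaDeviceSynchronize());  // catches errors during execution

    checkCuda(cudaFree(d_data));
    return 0;
}
```

A bare printf("Error") without cudaGetErrorString() tells you that something failed but not what; the string makes the difference between "Error" and "invalid device pointer".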

As a quick test, try running your code with cuda-memcheck:

cuda-memcheck myapp.exe

Thanks! I’ll analyze that.
As I said, I am a newbie. The first time I incorporated CUDA in my code everything went fine, so I did not create any error-checking methodology. I saw in some forums that cudaSuccess was used and I tried it, but maybe I am not doing it right.