Windows 64-bit, Intel Core i7
GTX480 and GT9800
I have a 2-GPU system: a GTX480 for computation and a GT9800 for video.
I have code that runs perfectly on the default device (0), which in my system is the GTX480.
I’ve discovered by chance that if I explicitly set the device with cudaSetDevice(0), I get what appears to be a memory leak: I can run the code once, but if I try to run it again, cudaMalloc fails. The problem comes and goes simply by commenting out (or restoring) the cudaSetDevice(0) call.
And yes, I’ve verified with cudaGetDevice(…) that I’m always using device 0 in both cases.
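For reference, here is a stripped-down sketch of the pattern I’m describing (not my actual code; the allocation size and kernel are placeholders). With the cudaSetDevice(0) line commented out it runs repeatedly; with it in, the second run reports an allocation failure:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical minimal repro: check every call so the failure point is visible.
#define CHECK(call)                                                     \
    do {                                                                \
        cudaError_t err = (call);                                       \
        if (err != cudaSuccess) {                                       \
            fprintf(stderr, "%s failed: %s\n", #call,                   \
                    cudaGetErrorString(err));                           \
            return 1;                                                   \
        }                                                               \
    } while (0)

int main() {
    CHECK(cudaSetDevice(0));          // commenting this out makes the problem go away

    int dev = -1;
    CHECK(cudaGetDevice(&dev));
    printf("running on device %d\n", dev);   // always prints 0 either way

    float *d_buf = NULL;
    CHECK(cudaMalloc((void **)&d_buf, 256 * 1024 * 1024));  // fails on the second run
    CHECK(cudaFree(d_buf));

    CHECK(cudaThreadExit());          // explicit context teardown; doesn't seem to help
    return 0;
}
```

The cudaFree and cudaThreadExit calls are there to rule out an ordinary leak on my side; the behavior is the same with or without them.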