Runtime memory leaks? How do you clear them?

Has anyone had issues with CUDA contexts and persistent memory leaks? By persistent I mean leaks that carry over between executions of the app and its kernels.

When I’m debugging my multi-threaded CUDA app, I often create a CUDA context, allocate some memory on the GPU, then break and stop the application mid-run (or it crashes :]). However, the next time I run the app, the free memory on the GPU has decreased. This is quite annoying, as the GPU eventually runs out of memory and I have to reboot the computer.
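For reference, this is roughly how I measure the free memory at startup between runs; a minimal sketch, assuming a toolkit that provides cudaMemGetInfo (the driver API equivalent is cuMemGetInfo):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t free_bytes = 0, total_bytes = 0;
    // Any runtime call creates the context implicitly; this one just
    // reports how much device memory is currently unallocated.
    cudaError_t err = cudaMemGetInfo(&free_bytes, &total_bytes);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaMemGetInfo failed: %s\n",
                     cudaGetErrorString(err));
        return 1;
    }
    std::printf("GPU memory: %lu MB free of %lu MB total\n",
                (unsigned long)(free_bytes >> 20),
                (unsigned long)(total_bytes >> 20));
    return 0;
}
```

Running this after a crashed debugging session is what shows the "free" number shrinking from run to run.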

Is there any way to just wipe everything off the GPU and start fresh?
I couldn't find anything in the manual or on the forums.

Seems like something that should exist, as it's a pain in the behind to have permanent memory leaks whenever the application crashes or doesn't exit cleanly.
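The best workaround I've come up with so far is registering an explicit teardown so that code paths which return normally or call exit() still release the context before the process dies. A sketch, assuming a toolkit that exposes cudaDeviceReset (older toolkits had cudaThreadExit instead); calling into CUDA from a signal handler isn't strictly safe, but it's serviceable while debugging, and a debugger break or hard kill still bypasses both hooks:

```cpp
#include <csignal>
#include <cstdlib>
#include <cuda_runtime.h>

static void teardown() {
    // Frees every device allocation owned by this process and destroys
    // its context (cudaThreadExit in pre-4.0 toolkits).
    cudaDeviceReset();
}

static void on_fatal_signal(int sig) {
    teardown();
    // Restore the default handler and re-raise so the crash still
    // surfaces normally after cleanup.
    std::signal(sig, SIG_DFL);
    std::raise(sig);
}

int main() {
    std::atexit(teardown);                 // normal returns and exit()
    std::signal(SIGSEGV, on_fatal_signal); // caught crashes only; a
    std::signal(SIGABRT, on_fatal_signal); // debugger break or hard
                                           // kill bypasses both hooks
    // ... CUDA work: cudaMalloc, kernel launches, etc. ...
    return 0;
}
```

Note this only releases allocations owned by the calling process; it can't reclaim memory a previous, crashed process left behind, which the driver is supposed to clean up on process exit.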

Thanks in advance for any and all info.

What OS is this on?

Windows XP SP2 x64

Are you running 178.08? There was a cudaMalloc bug that was fixed in that release.
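(If you're not sure which CUDA stack you're on, later toolkits can report it programmatically; note that cudaDriverGetVersion returns the CUDA API version the installed driver supports, e.g. 2010 for CUDA 2.1, not the 178.08-style display-driver number. A sketch, assuming those calls are available:)

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driver_api = 0, runtime_api = 0;
    // Versions are encoded as major * 1000 + minor * 10,
    // e.g. 2010 == CUDA 2.1.
    cudaDriverGetVersion(&driver_api);
    cudaRuntimeGetVersion(&runtime_api);
    std::printf("CUDA driver API: %d, runtime API: %d\n",
                driver_api, runtime_api);
    return 0;
}
```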