Disabling the "run time limit": how do I disable the so-called "KERNEL_EXEC_TIMEOUT"?

Hello all (and Season’s Greetings),

I recently had to get a new computer, which has a GeForce GT 525M in it, so now I’m actually able to program with CUDA 3.2. Yay!!

But it’s not all good news. I’ve written a program that calls cuDeviceGetAttribute() (from the CUDA Driver API) and discovered that something called a “run time limit”, otherwise known as KERNEL_EXEC_TIMEOUT, is currently imposed on the chip (a screenshot of the program’s output is attached)…

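For reference, the check boils down to something like this (a minimal sketch against the Driver API; my real program queries more attributes, and the error handling here is bare-bones):

```c
#include <stdio.h>
#include <cuda.h>   /* CUDA Driver API */

int main(void)
{
    CUdevice dev;
    char name[256];
    int timeout = 0;

    /* The Driver API must be initialized before any other call. */
    if (cuInit(0) != CUDA_SUCCESS || cuDeviceGet(&dev, 0) != CUDA_SUCCESS) {
        fprintf(stderr, "no CUDA device found\n");
        return 1;
    }

    cuDeviceGetName(name, sizeof(name), dev);

    /* 1 = a watchdog/run time limit applies to kernels on this device,
       0 = kernels may run indefinitely. */
    cuDeviceGetAttribute(&timeout, CU_DEVICE_ATTRIBUTE_KERNEL_EXEC_TIMEOUT, dev);

    printf("%s: run time limit on kernels = %s\n", name, timeout ? "Yes" : "No");
    return 0;
}
```

(Build with nvcc, or any C compiler linked against the CUDA driver library, e.g. -lcuda.)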
That’s not good news for what I want to do.

So what is it, why is it there, what’s the actual timeout period, can I change it, and, oh yeah, HOW DO I DISABLE IT??

I do realize that this question has probably been asked before, ad nauseam for some of you, but the search engine for this site doesn’t seem to work very well (or at all), and the CUDA documentation doesn’t seem to say much more about the matter than that this ‘timeout’ exists.
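The one solid lead I’ve turned up so far: on Windows Vista/7, this limit appears to be the OS’s “Timeout Detection and Recovery” (TDR) watchdog, which resets the display driver whenever a GPU task runs longer than about 2 seconds. Microsoft documents registry values under HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers that tune or disable it; a sketch of a .reg file follows (edit the registry at your own risk, and a reboot is required for it to take effect):

```
Windows Registry Editor Version 5.00

; Raise the TDR timeout from the default 2 seconds to 60 seconds.
; (dword values are hexadecimal: 0x3c = 60.)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:0000003c

; Alternatively, "TdrLevel"=dword:00000000 turns detection off entirely,
; but then a runaway kernel will hang the display until a reset.
```

I haven’t tried this on an Optimus machine yet, so treat it as a pointer rather than a confirmed fix.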

If it matters, my CPU is a 2.4 GHz “Intel(R) Core™ i5-2430M”, and the machine runs Windows 7 Professional. The GeForce GT 525M has 2 multiprocessors with 96 CUDA cores and a “peak clock frequency” of 2.1 GHz.

To complicate matters, I believe the machine is also running what’s called “NVIDIA Optimus Technology”, which means the NVIDIA chip has some kind of intimate, PCI-bus-level relationship with an on-board Intel graphics chip: the idea is that if and when the Intel graphics encounters more than it can handle, it can ‘switch to’ the NVIDIA GPU (or offload specific tasks to it; I’m not sure which)…

Can anyone offer any suggestions or recommendations as to where I might find more information about this?

Thanks in advance.

Sorry, my bad - didn’t look hard enough. Found the correct thread right in this Forum: The Official NVIDIA Forums | NVIDIA

Will post my comments there…