Will Fermi or a future GPU remove the need for the 2 sec kernel limit? ...by allowing better multitasking?

Windows enforces this limit for good reason - I think it’s to prevent the screen from locking up (since, of course, the GPU also drives the screen’s pixels). At least I’m fairly sure that’s the reason.

However, if the GPU supported multitasking, several applications could run at once. A kernel could then run indefinitely while the screen continued to update.

I’m not sure Fermi can support this, as concurrent kernel execution only seems possible within a single application, not across multiple applications. Hopefully someone can prove that wrong?
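For what it’s worth, here is roughly what Fermi’s concurrent kernel execution looks like from inside one application: independent launches issued into different CUDA streams. This is only a minimal sketch - busyKernel and the sizes are illustrative stand-ins:

```cpp
#include <cuda_runtime.h>

// Illustrative stand-in for real work.
__global__ void busyKernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * 2.0f + 1.0f;
}

int main()
{
    const int n = 1 << 20;
    float *d_a, *d_b;
    cudaMalloc(&d_a, n * sizeof(float));
    cudaMalloc(&d_b, n * sizeof(float));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // On Fermi these two launches may overlap on the device, but both
    // belong to the same process and context; separate applications get
    // separate contexts and are time-sliced rather than truly concurrent.
    busyKernel<<<(n + 255) / 256, 256, 0, s1>>>(d_a, n);
    busyKernel<<<(n + 255) / 256, 256, 0, s2>>>(d_b, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(d_a);
    cudaFree(d_b);
    return 0;
}
```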

If, however, it is right, is there any plan for true multitasking in Kepler, Maxwell, or later? It would be much easier to write programs without having to split them into sub-2-second chunks.

I’m certain it has to be on the roadmap somewhere. The lack of full preemption not only makes the watchdog necessary, it also makes hardware debugging on a single device much harder. There was a report from GTC 2010 that Kepler was focused on “preemption, virtual memory, and reduced CPU dependence”.

Funny, I was just reading that thread (obviously after I made my initial post), and found the comment about the watchdog thing.

I don’t know about anybody else, but multitasking on the GPU sounds really exciting - it would open up a whole new world, letting multiple programs that use the GPU run smoothly and seamlessly alongside each other. It would also make programming time-intensive kernels a LOT easier, and avoid stuttering the screen’s basic refresh. If Kepler really supports this, I’ll buy one immediately.

A (pre-emptive) thank you to NVIDIA for making this possible!

You can disable that timer. It is also often not much of a problem in practice: CUDA programs usually launch many blocks, so the work is easy to split across multiple short kernel calls.
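To make the splitting idea concrete, here is a minimal sketch of driving one large job as a series of short launches, each finishing well under the watchdog limit. processChunk, processAll, and the chunk size are hypothetical placeholders:

```cpp
#include <algorithm>
#include <cuda_runtime.h>

// Hypothetical kernel standing in for real per-element work.
__global__ void processChunk(float *data, int offset, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        data[offset + i] += 1.0f;  // placeholder computation
}

// Process the whole array as many short launches instead of one long one.
void processAll(float *d_data, int total)
{
    const int chunk = 1 << 20;  // tune so a single launch stays far under 2 s
    for (int offset = 0; offset < total; offset += chunk) {
        int count = std::min(chunk, total - offset);
        processChunk<<<(count + 255) / 256, 256>>>(d_data, offset, count);
        cudaDeviceSynchronize();  // gives the display driver a chance to run
    }
}
```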

Even with the timer disabled, at least on my setup, the screen freezes during long kernel calls. I only have a 9500 GT, but AFAIK Fermi would do the same. Even during fairly quick kernels (say 200 ms - and yes, I know that’s a lifetime in GPU terms for some apps, but not others), the screen stutters accordingly, i.e. one update every 200 ms, about 5 fps. Weirdly enough, the mouse cursor is still silky smooth … odd … it’s almost as if the graphics card makes an exception just for that (the cursor is typically drawn as a separate hardware overlay, which would explain it).

Also, end users will not necessarily want developers to start changing this kind of registry setting for their software.
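Rather than touching the registry on a user’s machine, a program can at least detect whether the watchdog applies to a given GPU, using the kernelExecTimeoutEnabled field of cudaDeviceProp (a real property in the CUDA runtime API):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("GPU %d (%s): watchdog %s\n", dev, prop.name,
               prop.kernelExecTimeoutEnabled
                   ? "enabled (GPU drives a display)"
                   : "disabled (long kernels are safe)");
    }
    return 0;
}
```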

These guys have a nice CUDA multitasking technique:
http://www.nvidia.com/content/GTC/posters/2010/A06-Task%20Management-for-Irregular-Workloads-on-the-GPU.pdf
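For anyone curious, the general idea behind that kind of scheme - a persistent kernel whose blocks repeatedly pull work from a global queue - looks roughly like the sketch below. This is the generic persistent-threads pattern, not necessarily the exact design in the poster, and persistentWorker/runTasks are made-up names. Note the irony: such a long-lived kernel is exactly what the watchdog kills on a display GPU.

```cpp
#include <cuda_runtime.h>

__device__ unsigned int g_nextTask;  // next unclaimed task index

// One long-lived kernel; each block keeps claiming tasks until none remain.
__global__ void persistentWorker(float *tasks, unsigned int numTasks)
{
    while (true) {
        __shared__ unsigned int taskId;
        if (threadIdx.x == 0)
            taskId = atomicAdd(&g_nextTask, 1u);  // block claims one task
        __syncthreads();
        if (taskId >= numTasks)
            return;  // queue drained; the whole block exits together
        // The block cooperates on task `taskId`; here one task is just
        // a single element, as a placeholder for irregular work.
        if (threadIdx.x == 0)
            tasks[taskId] *= 2.0f;
        __syncthreads();  // everyone is done with taskId before it is overwritten
    }
}

// Host side: reset the queue counter, launch a fixed-size grid, wait.
void runTasks(float *d_tasks, unsigned int numTasks)
{
    unsigned int zero = 0;
    cudaMemcpyToSymbol(g_nextTask, &zero, sizeof(zero));
    persistentWorker<<<64, 128>>>(d_tasks, numTasks);  // grid sized to fill the GPU
    cudaDeviceSynchronize();
}
```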

The watchdog timer is controlled by the OS’s display manager. If you want to disable the watchdog, disconnect your GPU from the display and shut down X11 (on Linux).

Just because there is a workaround doesn’t mean NVIDIA shouldn’t eliminate the problem entirely in a future hardware revision. :)

I would prefer some other solution. I do not see many consumer CUDA programs, and multitasking looks strange to me; I think they have much more important things to work on besides multitasking. Actually, I personally disagree a bit with the whole direction of CUDA development, though I do not have much information. Anyway, they are free to add multitasking, virtual memory, and whatever else they want.