The "time out" mechanism in CUDA programs

Hi everybody.
Has anybody paid attention to the “time out” mechanism?
The release notes say that “Individual GPU program launches are limited to a run time
of less than 5 seconds on a GPU with a display attached.”

So I wrote a program to test it. My program is very simple: it just copies a source array to a destination array.
In this case I use only 1 thread and 1 block, and increase the number of elements in the array.
My kernel function ran for more than 5 seconds (8.43 seconds), but the “time out” did not occur,
so I increased the number of elements to make the “time out” occur.
My kernel function’s processing time increased from 8 seconds to 14 seconds, and sometimes the “time out” occurred.
These results leave me confused about the “time out” mechanism.
If anybody knows how this “time out” mechanism works, please help me.
Thank you very much.
:)

My test condition:
windows XP sp2
CUDA 2.0
VC++ 2005
geforce 8800GT
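For reference, the copy test described above might look something like this (the kernel name, array size, and error check are my own assumptions, not the poster's actual code):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative sketch of the test: one thread in one block copies the
// whole array, so the kernel's run time grows with the element count.
__global__ void copyKernel(const float *src, float *dst, int n)
{
    for (int i = 0; i < n; ++i)
        dst[i] = src[i];
}

int main()
{
    const int n = 1 << 24;   // raise this to push the run time past 5 seconds
    float *dSrc, *dDst;
    cudaMalloc((void **)&dSrc, n * sizeof(float));
    cudaMalloc((void **)&dDst, n * sizeof(float));

    copyKernel<<<1, 1>>>(dSrc, dDst, n);       // 1 block, 1 thread
    cudaError_t err = cudaThreadSynchronize(); // CUDA 2.0-era sync call
    if (err != cudaSuccess)
        printf("kernel failed: %s\n", cudaGetErrorString(err));

    cudaFree(dSrc);
    cudaFree(dDst);
    return 0;
}
```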

5 seconds isn’t a hard limit, just a guideline: if you want to be sure a timeout doesn’t occur, you should stay below that. On some systems you can have kernels running for 10 or more seconds without the watchdog kicking in and killing the GPU program.

Thanks for your reply, Big_Mac.

You mean that

NVIDIA always guarantees that the “time out” will never occur if your kernel function’s running time is less than 5 seconds?

Thanks! :)

No. The timeout in Windows Vista is actually 2 seconds. You see, the timeout is implemented by the operating system/window manager, not by NVIDIA. On Linux without X running, there is no timeout, even on the display adapter.
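For what it’s worth, when the watchdog does kill a kernel, the CUDA runtime reports it as a launch-timeout error you can check for on the host. A sketch (the kernel here is hypothetical; cudaErrorLaunchTimeout is the runtime error code for a watchdog-terminated launch):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical long-running kernel, standing in for the poster's test kernel.
__global__ void longRunningKernel(float *data, int n)
{
    for (int i = 0; i < n; ++i)
        data[i] += 1.0f;
}

int main()
{
    const int n = 1 << 26;
    float *dData;
    cudaMalloc((void **)&dData, n * sizeof(float));

    longRunningKernel<<<1, 1>>>(dData, n);
    cudaError_t err = cudaThreadSynchronize();   // wait for the kernel
    if (err == cudaErrorLaunchTimeout)
        printf("watchdog killed the kernel: %s\n", cudaGetErrorString(err));
    else if (err != cudaSuccess)
        printf("some other error: %s\n", cudaGetErrorString(err));

    cudaFree(dData);
    return 0;
}
```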

Thank you MisterAnderson42.

Yes, I have read “CUDA_Release_Notes_2.0” several times.

But in this situation, I am using Windows XP 32-bit.

Thank you.

By the way,
apart from the kernel’s processing time being too long,
what other things in a program can cause the “time out” to occur?
Thanks! :)

Does that mean the kernel function must run for less than a specific time? How do you deal with the situation where the program is just a simple kernel, but it takes a long time to execute?

Simple kernels are simple to split in two. Just do a little at a time.
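The “do a little at a time” idea might look like this (the chunking scheme and names are my own sketch, not a prescribed pattern):

```cpp
#include <cuda_runtime.h>

// Sketch: split one long-running copy into many short launches so that
// each individual launch finishes well under the watchdog limit.
__global__ void copyChunk(const float *src, float *dst, int offset, int count)
{
    for (int i = 0; i < count; ++i)
        dst[offset + i] = src[offset + i];
}

void copyInChunks(const float *dSrc, float *dDst, int n, int chunk)
{
    for (int offset = 0; offset < n; offset += chunk) {
        int count = (n - offset < chunk) ? n - offset : chunk;
        copyChunk<<<1, 1>>>(dSrc, dDst, offset, count);
        cudaThreadSynchronize();   // optional: wait so each launch completes
                                   // before the next one is queued
    }
}
```

The watchdog applies per launch, so as long as `chunk` is sized to keep each launch short, the total job can take arbitrarily long.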

You asked if NVIDIA would guarantee that timeouts would never occur if kernel run times are under 5s. I simply provided a counter example. Someone may want to run your app on Vista someday, no?

I got a timeout under Linux after about 100 seconds or so.

I thought the running time of a kernel isn’t limited under Linux-family OSes.

Am I wrong?

The watchdog timer shouldn’t exist on Linux IF:

  1. X is off

  2. You’re using 178.28 drivers or newer (AFAIK)

There might be other timeout mechanisms that I don’t know of.

My test condition:
windows XP sp2
CUDA 2.0 Beta
VC++ 2005
geforce 8800GT
My program ran 500 times without a “time out”; the average time was 8.261 seconds.
If my kernel runs for more than 8.3 seconds, the “time out” sometimes occurs.

The current version is 2.1 Beta.

And on Linux (I used openSUSE 10.2)

I never got a timeout; my TestKernelFunction() ran for more than 300 seconds without a timeout occurring.

I don’t have enough patience to test further.