Tesla c2050 maximum runtime


I am planning to buy a workstation with the new Tesla c2050 when it is released. I will probably be buying the AMAX PSC-2n workstation.
I read in the release notes of CUDA 3.0 for Linux that individual GPU program launches are limited to a run time of less than 5 seconds on a GPU with a display attached.
Does this apply to the Tesla c2050 cards as well, or does it only hold for standard GPUs that have a monitor attached? I've read that the new Tesla card will have a video output, which will tempt many people to connect their monitor to their Tesla card. If someone connects a monitor to a Tesla card, will there be such a maximum runtime? If so, would the problem be solved by buying a second, cheaper graphics card and connecting the monitor to that card instead? If no monitor is attached to a Tesla card, is there still a maximum run time on program launches?
There doesn’t seem to be a lot of documentation on the maximum run times of CUDA programs, so any help/advice/pointers would be greatly appreciated.

Thanks a lot.

I do not have a Tesla, only normal graphics cards, and on Windows the limit is only 2 seconds, after which TDR (Timeout Detection and Recovery) kicks in. This also happens for cards that do not have a monitor attached; I am testing it now.

Since you mention Linux: there is no watchdog timer for cards that do not have an X display on them. (This is true for all present devices, and ought to be true for future devices as well.)

Cards that do not have monitors attached do not have a watchdog timer. If you run deviceQuery look at the “Run time limit on kernels” field.
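If you'd rather check this from your own code than from deviceQuery, a minimal sketch using the CUDA runtime API looks like the following; the `kernelExecTimeoutEnabled` member of `cudaDeviceProp` is the same information deviceQuery prints as "Run time limit on kernels" (this is a sketch, assuming a working CUDA toolkit install, not a definitive tool):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA devices found.\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // kernelExecTimeoutEnabled mirrors the "Run time limit on kernels"
        // field reported by the deviceQuery SDK sample: non-zero means the
        // watchdog timer applies to kernels launched on this device.
        std::printf("Device %d (%s): run time limit on kernels: %s\n",
                    dev, prop.name,
                    prop.kernelExecTimeoutEnabled ? "Yes" : "No");
    }
    return 0;
}
```

Compile with `nvcc check_timeout.cu -o check_timeout` and run it on the machine in question; a card driving an X display should report "Yes" and a headless card "No".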

There was some discussion of the kernel run-time limit on Fermi in another thread, and I think that, thanks to its concurrent-kernel support, it should be able to run without triggering the watchdog timer.

You can also always get a cheaper second card, but take note that, depending on the chipset, it may split your PCIe lanes to 8/8 instead of 16, which can limit PCIe bandwidth.

Everyone was expecting this… but it’s not confirmed. The toolkit 3.0 docs say that you can run multiple kernels at once, but only from within the same context.

The question is whether graphics would act like a different context or not.

We’ll know more in two weeks I guess.