I have a small test program that fails on large datasets, but works on smaller ones.
Actually, I have two versions: a fast version that “works” on all sizes (it takes about 1.4 seconds on the big case),
and a slow version, which uses only global memory. The slow one works on the smaller cases, but on the big one the program ends after about 11 seconds with no error message, and the output array is all zeros; it looks like all the threads were ‘killed’ before finishing.
I am aware of the 5-second time limit on Windows, but what about Linux? There is nothing about that in the FAQ.
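In case it helps anyone hitting the same silent failure, here is a minimal sketch of how a launch can be wrapped with error checks so that a watchdog kill at least reports something instead of silently leaving the output zeroed. The kernel name slowKernel and the launch configuration are placeholders, not the poster's actual code; very old toolkits used cudaThreadSynchronize() where this uses cudaDeviceSynchronize().

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel standing in for the slow, global-memory version.
__global__ void slowKernel(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = 1.0f;   // the real work would go here
}

int main()
{
    const int n = 1 << 20;
    float *d_out = 0;
    cudaMalloc((void **)&d_out, n * sizeof(float));

    slowKernel<<<(n + 255) / 256, 256>>>(d_out, n);

    // Launch/configuration errors show up immediately...
    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess)
        fprintf(stderr, "launch failed: %s\n", cudaGetErrorString(err));

    // ...but an execution failure such as a watchdog kill only surfaces
    // once we synchronize and ask for the kernel's status.
    err = cudaDeviceSynchronize();
    if (err != cudaSuccess)
        fprintf(stderr, "kernel aborted: %s\n", cudaGetErrorString(err));

    cudaFree(d_out);
    return 0;
}
```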
I have installed two GTX 8800 cards, and the system now runs Fedora Core 6. I am wondering whether I can make X run on one GPU and run my CUDA program on the second card, so that I can avoid the “watchdog”. Thank you!
The issue here is purely that the watchdog timeout is in force on whichever GPU X is using. If a GPU is not being used by X, there shouldn’t be any watchdog timeout on it.
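If you want to check which card the timeout applies to from inside a program, something like the sketch below should work on toolkits recent enough to expose cudaDeviceProp::kernelExecTimeoutEnabled (it was not in the very earliest CUDA releases). It lists every device and then selects the first one whose kernels are not subject to the timeout, i.e. a card that X is not driving.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);

    // Report the watchdog status of every visible device.
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("device %d (%s): watchdog %s\n",
               dev, prop.name,
               prop.kernelExecTimeoutEnabled ? "enabled" : "disabled");
    }

    // Use the first device that is free of the timeout for the
    // long-running kernels.
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        if (!prop.kernelExecTimeoutEnabled) {
            cudaSetDevice(dev);
            printf("using device %d for long-running kernels\n", dev);
            break;
        }
    }
    return 0;
}
```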