Desktop stops updating while running CUDA

Good morning everyone.
I am running CUDA on Windows XP Professional SP2 (Japanese)

When I run heavy code on CUDA that takes several seconds, I don’t see any screen updates on the desktop.

For example:

  1. With TASKMGR.EXE open, I launch my template.exe.
  2. A console window opens, but sometimes the taskbar doesn’t show the new console window.
  3. While template.exe is running, TASKMGR.EXE doesn’t show me any information (as if it had hung), and only the mouse pointer can be moved.
  4. When template.exe finishes, all windows on the desktop become active again :)

My question is: in steps 2-3, are GPU resources held exclusively, shared between the Windows desktop and the CUDA application?

Please see the CUDA release notes for comments about this issue. The problem is that you are running a long computation on your primary display adapter. During the computation, the GPU can’t be used to update the display.

Therefore we recommend using one GPU for the primary display and a G80 GPU for CUDA computation, as described in the release notes.
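In a two-GPU setup like the one Mark recommends, the application has to select the non-display device explicitly. A minimal sketch is below; note that the `kernelExecTimeoutEnabled` device property (which reports whether the OS watchdog applies, i.e. whether the device is usually driving a display) only appeared in later CUDA releases, and the device numbering here is illustrative, not from the original posts.

```cuda
// Sketch: choose a CUDA device that is not subject to the display watchdog.
// Assumes a CUDA version whose cudaDeviceProp exposes kernelExecTimeoutEnabled.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // kernelExecTimeoutEnabled is set when the OS run-time limit applies,
        // which usually means this device is driving the display.
        if (!prop.kernelExecTimeoutEnabled) {
            printf("Using device %d (%s) for compute\n", i, prop.name);
            cudaSetDevice(i);
            return 0;
        }
    }
    printf("No display-free device found; long kernels may hit the watchdog\n");
    return 0;
}
```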

Mark

Mark,

I missed the release notes; sorry, my mistake.

After my last post, I found discussions in this forum about using another device for the primary display. I have an NV4x-based adapter and will try it tomorrow.

I hope CUDA applications will work better with a single G8x (and on other platforms) in the future!

Thanks to Mark and NVIDIA for the great work!

Mark, I have come at this from the other direction: I was reading the release notes, wanted to ask about this issue, and so checked the forums.

Can the Windows watchdog be kicked from the CUDA API? I was under the impression that the kernel driver must do this. I understand that the video would not update whilst the card is still in use, but a 5-second limit on processing time seems like a barrier that could easily be overcome - though not if there is no API call available to use.

To put it simply: is this something planned to be addressed in a near-future driver / CUDA interface, or is it something we have to live with for a while? And what is the recommended workaround when you want to use DX or OGL to display data after a very big calculation - interleave short idle times into the calculation every few seconds so that the 2D video driver catches up?

I agree- essential surely!?

Any ideas when this will be possible?

Asking around, a developer of CUDA software has noted that this is only an issue if you have a single kernel hogging the GPU. As I have been told, if the problem is broken up into chunks, with the intermediates stored in global memory, then it runs without problems. I assume that freeing the process handle allows the card to accept other requests and function as a video card - which presumably gives it a chance to kick the watchdog. If that is the case, and assuming that the global memory can be recovered undamaged between launches, then the problem, as far as I am concerned, is a non-issue. It would be much appreciated if anyone could confirm that this is correct.
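The chunking approach described above can be sketched as follows. This is a hypothetical example, not code from the thread: `step_kernel` and the chunk count are illustrative, and the idea is simply that many short launches, with state kept in global memory between them, replace one long-running kernel so the driver gets windows to service the display.

```cuda
// Sketch: split one long computation into many short kernel launches,
// keeping intermediate state in device global memory between launches.
#include <cuda_runtime.h>

__global__ void step_kernel(float *state, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        state[i] = state[i] * 0.5f + 1.0f;  // stand-in for one slice of the work
}

int main()
{
    const int n = 1 << 20;
    const int n_chunks = 100;  // many short launches instead of one long one
    float *d_state = 0;
    cudaMalloc((void**)&d_state, n * sizeof(float));
    cudaMemset(d_state, 0, n * sizeof(float));

    dim3 block(256), grid((n + block.x - 1) / block.x);
    for (int c = 0; c < n_chunks; ++c) {
        step_kernel<<<grid, block>>>(d_state, n);
        // cudaThreadSynchronize (the CUDA 1.x name) returns control to the
        // host between launches, giving the driver a chance to update the
        // display before the next chunk starts.
        cudaThreadSynchronize();
    }
    cudaFree(d_state);
    return 0;
}
```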