Releasing the video card?

After getting LabVIEW and CUDA working together, another problem has popped up. Here is a brief description of my program:

  1. Acquire data
  2. Do some minor calculations on the CPU
  3. Allocate memory on the video card
  4. Copy data to the GPU
  5. Signal process on the video card
  6. Copy data off the GPU
  7. Display data
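To make the question concrete, here is a minimal sketch of what the DLL entry point for steps 3-6 might look like (the function name `ProcessBuffer`, the placeholder kernel, and the launch configuration are all hypothetical; error handling is abbreviated):

```cuda
#include <cuda_runtime.h>

// Hypothetical kernel standing in for step 5 (signal processing).
__global__ void process(float *d, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        d[i] *= 2.0f;   // placeholder computation
}

// Exported entry point that LabVIEW would invoke through a
// Call Library Function Node.
extern "C" __declspec(dllexport)
int ProcessBuffer(float *host, int n)
{
    float *dev = NULL;
    size_t bytes = (size_t)n * sizeof(float);

    if (cudaMalloc((void **)&dev, bytes) != cudaSuccess)       // step 3
        return -1;
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);      // step 4
    process<<<(n + 255) / 256, 256>>>(dev, n);                 // step 5
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);      // step 6
    cudaFree(dev);
    return (int)cudaGetLastError();
}
```

The `__declspec(dllexport)` attribute assumes a Windows build, which is the usual LabVIEW host environment.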

When running the DLL inside LabVIEW, I get all the way to step 7, but there is a timing issue. Steps 1-7 can easily be completed in under 30 ms. However, when the LabVIEW program runs, it freezes and then produces an error after 5 seconds. It is always 5 seconds, which conveniently matches the watchdog timer that other people have had issues with. The question then is: "Is there a command, or a way for the DLL to exit gracefully after the 30 ms of calculations have completed, instead of timing out?"

Strange interaction; the card should be available when it is not in use.

Try calling cudaThreadExit() after step 6.
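For example, at the end of the DLL call (a sketch; `host`, `dev`, and `bytes` stand for whatever buffers your function actually uses, and note that later CUDA toolkits deprecated cudaThreadExit() in favor of cudaDeviceReset()):

```cuda
// After step 6 (copy the results off the GPU), tear down the runtime
// context so the card is released between LabVIEW calls.
cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);  // step 6
cudaFree(dev);
cudaThreadExit();  // destroys this thread's runtime context;
                   // the next CUDA call re-creates it
```

The trade-off is that re-creating the context on the next call adds some latency, so only do this if holding the context is what trips the watchdog.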

If that does not work, you can use the low-level driver API and detach the context explicitly.
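With the driver API you control the context lifetime yourself. A sketch of the pattern (cuCtxDetach() comes from the driver API of that CUDA generation; newer toolkits replace it with cuCtxDestroy()):

```cuda
#include <cuda.h>

// Create a context, do the work, then detach so the card is freed.
int run_once(void)
{
    CUdevice  dev;
    CUcontext ctx;

    if (cuInit(0) != CUDA_SUCCESS)
        return -1;
    cuDeviceGet(&dev, 0);           // first CUDA device
    cuCtxCreate(&ctx, 0, dev);      // context is current on this thread

    /* ... cuMemAlloc / cuMemcpyHtoD / kernel launch / cuMemcpyDtoH ... */

    cuCtxDetach(ctx);               // drop the usage count; the context
                                    // is destroyed when it reaches zero
    return 0;
}
```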