Can CPU and GPU work at the same time?

Can the CPU and GPU work at the same time? If not, why? I think they could potentially work at the same time, and if so, how do I do that?

Thanks! :yes:

Yes, they can; it is described in the programming guide.

Thanks,

You mean the section “Asynchronous Concurrent Execution” ?

Will that work in CUDA 1.0?

The 1.0 API contains cudaThreadSynchronize/cuCtxSynchronize, so until you put in such a barrier to wait for kernel completion, the CPU keeps running and it should work well…
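
Here is a minimal sketch of that pattern with the 1.0-era runtime API (my_kernel, do_cpu_work and the sizes are just placeholder names, not from any sample): the launch returns immediately, and cudaThreadSynchronize() is the barrier you add only when you actually need the kernel's results.

```cuda
#include <cuda_runtime.h>
#include <stdlib.h>

// Placeholder kernel: trivial per-element work on the device.
__global__ void my_kernel(float *d_data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        d_data[i] *= 2.0f;
}

// Placeholder for CPU-side work that does not depend on the kernel's output.
void do_cpu_work(void)
{
    /* ... independent host computation ... */
}

int main(void)
{
    const int N = 1 << 20;
    size_t bytes = N * sizeof(float);

    float *h_data = (float *)malloc(bytes);
    for (int i = 0; i < N; ++i) h_data[i] = (float)i;

    float *d_data;
    cudaMalloc((void **)&d_data, bytes);
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);

    // The launch is asynchronous: control returns to the CPU right away.
    my_kernel<<<(N + 255) / 256, 256>>>(d_data, N);

    // CPU and GPU are now running concurrently.
    do_cpu_work();

    // Barrier: wait here only once the kernel's results are needed.
    cudaThreadSynchronize();

    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);

    cudaFree(d_data);
    free(h_data);
    return 0;
}
```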

Unfortunately, and I guess there is no good reason for that, we do not have a non-blocking completion test in 1.0… am I missing something there?

Cheers,

Cédric

It’s a really interesting question!!!
The answer is undoubtedly YES!

But that’s only the start of the story.

The first part is that you probably already have the same algorithm coded for the CPU (likely multi-threaded), and if you don't have it yet, you will write a CPU mock-up anyway to check the algorithm itself :-)

The second part is that on many of the current Windows/Linux/Mac configurations that support CUDA, you will find low-end to mid-range graphics cards (say an 8400/8500/8600-level GPU) that will not provide a great improvement, depending on the algorithm used (global memory accesses, register limits, and so on will hold the GPU back).
For example, I own a Core 2 Duo at 2.4 GHz with an 8600M GT that shows anywhere from a 10x to a 1x performance ratio versus the CPU, depending on the example I throw at it.

So the best approach is to KEEP your CPU-developed part and run it in parallel with your CUDA part; even with low-end cards, you will greatly improve overall performance ;-)
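
As a rough sketch of what that split might look like (process_gpu/process_cpu and the gpu_share ratio are made-up placeholders; in practice you would tune the ratio by benchmarking both sides on your own machine):

```cuda
#include <cuda_runtime.h>
#include <stdlib.h>

// Placeholder per-element work on the device.
__global__ void process_gpu(float *d, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        d[i] = d[i] * d[i];
}

// The same work, done on the host for the CPU's share of the data.
static void process_cpu(float *h, int n)
{
    for (int i = 0; i < n; ++i)
        h[i] = h[i] * h[i];
}

int main(void)
{
    const int   N         = 1 << 20;
    const float gpu_share = 0.75f;            // assumed split, tune per machine
    const int   n_gpu     = (int)(N * gpu_share);
    const int   n_cpu     = N - n_gpu;

    float *h_data = (float *)malloc(N * sizeof(float));
    for (int i = 0; i < N; ++i) h_data[i] = (float)i;

    float *d_data;
    cudaMalloc((void **)&d_data, n_gpu * sizeof(float));
    cudaMemcpy(d_data, h_data, n_gpu * sizeof(float), cudaMemcpyHostToDevice);

    // GPU starts on its share; the launch returns immediately...
    process_gpu<<<(n_gpu + 255) / 256, 256>>>(d_data, n_gpu);

    // ...so the CPU can chew through its own share in the meantime.
    process_cpu(h_data + n_gpu, n_cpu);

    // Wait for the GPU, then merge its results back into the host array.
    cudaThreadSynchronize();
    cudaMemcpy(h_data, d_data, n_gpu * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(d_data);
    free(h_data);
    return 0;
}
```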

(I am working on a personal CUDA project that benchmarks both CPU and GPU performance on an algorithm and then allocates tasks to both depending on their respective performance.)