What can the CPU do while the GPU is computing?

Hi all. I have a question about cooperation between the CPU and GPU in real-time image processing.

For example, my system has a single-core CPU and a GPU, and I have a real-time image-processing task: say, 100 frames of images. I need to process the images one by one and display each result on the screen.
Let’s assume the image-processing algorithm has four steps, A->B->C->D. I have implemented step C in CUDA, and it takes about 500 ms to finish.

Question: while the GPU is processing step C, does the CPU have to wait for it to complete, even in a multithreaded implementation? Since I need one thread to wait for the GPU’s response, that thread may consume a lot of CPU time. Is that the case? I am hoping there is some way to spend those 500 ms of otherwise idle CPU time doing other work (such as processing another image).

Any response would be appreciated, thanks. Or please point me to material I can read on this topic.

CUDA is asynchronous, and I would recommend that you read the CUDA C Programming Guide to learn about the features that let you make the best use of that (e.g. streams).

Ideally you can manage to overlap CPU-based processing, host-to-device memory transfers, GPU computation, and device-to-host memory transfers.
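To make that concrete, here is a minimal sketch of the pattern (the kernel name `stepC` and buffer sizes are made up for illustration): kernel launches and `cudaMemcpyAsync()` calls return immediately, so the CPU is free to run steps A/B/D on another frame while the GPU works. Note that host buffers must be pinned (`cudaMallocHost`) for the copies to actually overlap.

```cpp
#include <cuda_runtime.h>

// Hypothetical kernel standing in for step C of the pipeline.
__global__ void stepC(float* img, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) img[i] *= 2.0f;  // placeholder computation
}

// Process frames with two streams so transfer/compute on frame i
// can overlap CPU work and transfers for frame i+1.
void processFrames(float** pinnedHostFrames, int numFrames, int n)
{
    cudaStream_t streams[2];
    float*       devBuf[2];
    for (int s = 0; s < 2; ++s) {
        cudaStreamCreate(&streams[s]);
        cudaMalloc(&devBuf[s], n * sizeof(float));
    }

    for (int i = 0; i < numFrames; ++i) {
        int s = i % 2;
        // All three calls below are asynchronous: they enqueue work on
        // stream s and return immediately, leaving the CPU free.
        cudaMemcpyAsync(devBuf[s], pinnedHostFrames[i], n * sizeof(float),
                        cudaMemcpyHostToDevice, streams[s]);
        stepC<<<(n + 255) / 256, 256, 0, streams[s]>>>(devBuf[s], n);
        cudaMemcpyAsync(pinnedHostFrames[i], devBuf[s], n * sizeof(float),
                        cudaMemcpyDeviceToHost, streams[s]);

        // ...here the CPU can run steps A/B on frame i+1, or step D on
        // an earlier frame whose stream has finished...
    }

    // Wait only when the results are actually needed.
    for (int s = 0; s < 2; ++s) {
        cudaStreamSynchronize(streams[s]);
        cudaFree(devBuf[s]);
        cudaStreamDestroy(streams[s]);
    }
}
```

With this structure the 500 ms of step C on frame i is hidden behind the CPU-side work on neighboring frames, rather than being spent spinning in a wait.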

What is preventing you from also doing steps A, B, and D on the GPU?


Have a look at the [font=“Courier New”]cudaDeviceScheduleBlockingSync[/font] flag in [font=“Courier New”]cudaSetDeviceFlags()[/font]: it makes CUDA give up the CPU while synchronizing, so that other threads/processes can run instead of one thread spin-waiting on the GPU.
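A short sketch of how that flag is used (it must be set before the CUDA context is created on the device, i.e. before the first CUDA call that touches it):

```cpp
#include <cuda_runtime.h>

int main()
{
    // Request blocking synchronization: threads waiting in calls such as
    // cudaDeviceSynchronize() or cudaStreamSynchronize() will sleep on an
    // OS primitive instead of busy-spinning and burning a CPU core.
    cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);
    cudaSetDevice(0);

    // ...launch kernels, enqueue transfers...

    cudaDeviceSynchronize();  // now yields the CPU while the GPU works
    return 0;
}
```

On a single-core machine like the one described above, this matters a lot: with the default spin-wait behavior, the waiting thread would starve every other thread for the whole 500 ms.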