Two Questions!

Hello everyone. I have two questions:
1. Is there any way to carry out real-time computation, i.e. copy data to the device and get results back CONTINUOUSLY, writing them to the hard disk? I think this would be a very good way to do signal-processing work.

2. Is it possible, with only one GPU, to run two or more kernels AT THE SAME TIME?

  1. CUDA 1.1 supports asynchronous data transfers, so you can transfer data to the GPU and compute at the same time, but there is no way to write to disk without going through CPU memory.

  2. No. The current CUDA programming model only allows a single kernel to be executed at a time.
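To illustrate answer 1, here is a minimal sketch of how an asynchronous host-to-device copy, a kernel launch, and a copy back can be queued in a single CUDA stream so the host stays free while the GPU works. The kernel `process` and the buffer sizes are placeholders, not anything from the thread; note that async copies require pinned host memory (`cudaMallocHost`).

```cuda
#include <cuda_runtime.h>

// Hypothetical kernel standing in for real signal-processing work.
__global__ void process(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int N = 1 << 20;
    float *h_buf, *d_buf;
    cudaMallocHost((void **)&h_buf, N * sizeof(float)); // pinned memory, required for async copies
    cudaMalloc((void **)&d_buf, N * sizeof(float));

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Queue copy, kernel, and copy-back in one stream; the host thread
    // returns immediately and can do other work (e.g. disk I/O) meanwhile.
    cudaMemcpyAsync(d_buf, h_buf, N * sizeof(float), cudaMemcpyHostToDevice, stream);
    process<<<(N + 255) / 256, 256, 0, stream>>>(d_buf, N);
    cudaMemcpyAsync(h_buf, d_buf, N * sizeof(float), cudaMemcpyDeviceToHost, stream);

    cudaStreamSynchronize(stream); // wait before touching h_buf on the host

    cudaStreamDestroy(stream);
    cudaFree(d_buf);
    cudaFreeHost(h_buf);
    return 0;
}
```

The disk write itself still has to happen from the host after the copy-back completes, as the answer says.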

Asynchronous data transfers are only part of it; what matters much more is real-time kernel execution on the data that has been asynchronously transferred to the device!

I’ve read the example. I think asynchronous data transfer means you can start transferring data back just before the kernel finishes its work, so you save the time spent waiting for all the threads to finish. So I think what’s more important is dynamic, real-time kernel execution combined with dynamic, real-time data transfers.
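The continuous acquire-compute-store pipeline described above can be sketched with two streams and double buffering: while the GPU processes one chunk, the host fills the next buffer and writes the previous result to disk. This is only an illustration under assumed names; `process`, `read_chunk`, and `write_chunk` are hypothetical application functions, and `h_buf`/`d_buf` are assumed to be pairs of pinned host and device buffers.

```cuda
// Double-buffered streaming loop: while the GPU works on chunk k in one
// stream, the host prepares chunk k+1 and writes the finished chunk k-1
// to disk. The disk write still goes through CPU memory.
for (int k = 0; k < num_chunks; ++k) {
    int cur = k % 2;
    read_chunk(h_buf[cur], k);  // fill pinned host buffer from the signal source
    cudaMemcpyAsync(d_buf[cur], h_buf[cur], CHUNK_BYTES,
                    cudaMemcpyHostToDevice, stream[cur]);
    process<<<grid, block, 0, stream[cur]>>>(d_buf[cur], CHUNK_SIZE);
    cudaMemcpyAsync(h_buf[cur], d_buf[cur], CHUNK_BYTES,
                    cudaMemcpyDeviceToHost, stream[cur]);
    if (k > 0) {
        int prev = (k - 1) % 2;
        cudaStreamSynchronize(stream[prev]); // chunk k-1 is back in host memory
        write_chunk(h_buf[prev], k - 1);     // write result to disk from the CPU
    }
}
cudaStreamSynchronize(stream[(num_chunks - 1) % 2]); // drain the last chunk
```

Whether the copy in one stream actually overlaps the kernel in the other depends on the device reporting `deviceOverlap`; even without that, the host-side disk I/O overlaps the GPU work.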