cuFFT synchronizing

Hello,

I would like to use cuFFT as part of a series of kernel calls launched from the HOST onto a single GPU DEVICE. My question is: do I need to synchronize between these kernel(s)? The outline of what I would like to do is given in the following pseudo-code:

...
__global__ void kern1(cuComplex *out) {
  // do stuff and store result in out array
}

__global__ void kern2(cuComplex *out) {
  // do stuff and store result in out array
}
...
int main(){
  ...
  kern1<<<NumBlocks,ThreadsPerBlock>>>(out);

  // synchronize here ??
  cudaDeviceSynchronize();

  // call cuFFT FFT on the output array out, e.g.
  cufftExecC2C(plan, out, out, CUFFT_FORWARD);

  // synchronize here ???
  cudaDeviceSynchronize();

  kern2<<<NumBlocks,ThreadsPerBlock>>>(out);

  // synchronize here ???
  cudaDeviceSynchronize();
  ...
}

I realize the above pseudo-code is far from complete, but I am just wondering whether it is necessary, when using a single GPU DEVICE, to call cudaDeviceSynchronize() between the kernel(s) as listed above.

Thank you in advance for any advice/hints!

If you are launching everything into the same stream, then stream semantics apply. This is true when that stream is the default stream or any other created stream.

That means that operation B, issued into a stream after operation A, will not begin until A is complete.

Therefore, in your example the cuFFT call will not begin (insofar as the GPU activity is concerned) until kern1 is complete. Likewise, kern2 will not begin until the GPU activity associated with the cuFFT call is complete.

Explicit synchronization between items issued into the same stream is not necessary.
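To make that concrete, here is a minimal sketch of the pattern from the question with no intermediate synchronization. The kernel bodies are placeholders and error checking is omitted; the size N and the in-place C2C transform are assumptions for illustration:

```cuda
#include <cufft.h>
#include <cuda_runtime.h>

#define N 1024

__global__ void kern1(cufftComplex *out) { /* produce data in out */ }
__global__ void kern2(cufftComplex *out) { /* consume/modify out  */ }

int main() {
  cufftComplex *out;
  cudaMalloc(&out, N * sizeof(cufftComplex));

  cufftHandle plan;
  // Without cufftSetStream(), cuFFT work is issued into the default stream,
  // the same stream the <<<...>>> launches below use.
  cufftPlan1d(&plan, N, CUFFT_C2C, 1);

  // All three operations go into the same stream, so the device executes
  // them in issue order -- no cudaDeviceSynchronize() needed between them.
  kern1<<<N / 256, 256>>>(out);
  cufftExecC2C(plan, out, out, CUFFT_FORWARD);
  kern2<<<N / 256, 256>>>(out);

  // Synchronize once, only when the host needs the results.
  // (A blocking cudaMemcpy back to the host would also serve this purpose.)
  cudaDeviceSynchronize();

  cufftDestroy(plan);
  cudaFree(out);
  return 0;
}
```

If you later move to a non-default stream, the same ordering guarantee holds as long as you pass that stream to cufftSetStream() and to every kernel launch.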

(The exception to this could be if managed memory is used (or cudaMemcpyAsync), and you are talking about processing results data on the host (in host code) from a managed allocation. In that case, an explicit device synchronize may be needed before your host code can assume that the results have been computed. But that is not really a stream operation anyway. The host code is not issued into a stream.)
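A sketch of that exception, using a managed allocation (the kernel and sizes are illustrative assumptions):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void compute(float *data, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) data[i] *= 2.0f;
}

int main() {
  const int n = 1024;
  float *data;
  cudaMallocManaged(&data, n * sizeof(float));
  for (int i = 0; i < n; ++i) data[i] = 1.0f;

  compute<<<(n + 255) / 256, 256>>>(data, n);

  // The launch is asynchronous with respect to the host. Host code is not
  // issued into any stream, so before it reads the managed allocation an
  // explicit synchronize is needed; otherwise the read may race with the
  // still-running kernel.
  cudaDeviceSynchronize();

  printf("data[0] = %f\n", data[0]);  // safe to read after the synchronize

  cudaFree(data);
  return 0;
}
```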

Thank you for the information Robert, I appreciate it