I must admit that I am a novice when it comes to streams, but I got them working and found some interesting results when comparing their performance against one big launch and against several smaller launches with the same parameters as the streamed version.
I based my work on the simpleStreams example in the SDK, but extended it to use a different kernel with the same kind of access pattern. I also added a cudaMemcpy H2D step.
My streams are set up like:

for (i = 0; i < launches; i++)
    cudaMemcpyAsync(g_in + i*memory_size … H2D)

and a third version is exactly the same as the first, just with synchronous cudaMemcpy instead of cudaMemcpyAsync.
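For reference, here is a fuller sketch of what I mean by the streamed version (approach 1). This is only an illustration: the kernel name, host pointer, element type, and launch configuration are placeholders I made up; only g_in, memory_size, and launches come from my snippet above.

```cuda
// Hypothetical sketch of the streamed setup (approach 1).
// myKernel, h_in, blocks and threads are placeholders.
cudaStream_t *streams = (cudaStream_t *)malloc(launches * sizeof(cudaStream_t));
for (int i = 0; i < launches; i++)
    cudaStreamCreate(&streams[i]);

for (int i = 0; i < launches; i++) {
    // Note: h_in must be page-locked (allocated with cudaMallocHost)
    // for cudaMemcpyAsync to actually run asynchronously; with
    // ordinary malloc'd memory the copy silently becomes synchronous.
    cudaMemcpyAsync(g_in + i * memory_size, h_in + i * memory_size,
                    memory_size * sizeof(float),
                    cudaMemcpyHostToDevice, streams[i]);
    myKernel<<<blocks, threads, 0, streams[i]>>>(g_in + i * memory_size);
}
cudaThreadSynchronize();  // wait for all streams before timing stops
```

The pinned-memory point may matter for my question: if the host buffer is not page-locked, the async copies degrade to ordinary blocking copies, which could explain why the streamed and non-streamed loops perform the same.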
The results show no difference between approaches 1 and 3, but both are a lot quicker than approach 2. I benchmarked using cudaEventElapsedTime, gettimeofday, and the Linux time command. Is there something I am missing about how to use streams, or are the memory copies being overlapped by the driver/operating system in a similar fashion anyway, giving the same performance?
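For completeness, my event-based timing follows the standard pattern, roughly like this (the region being measured is elided):

```cuda
// Standard CUDA event timing around the launches/copies.
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start, 0);
// ... the copies and kernel launches being measured ...
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);  // block until the stop event completes

float elapsed_ms;
cudaEventElapsedTime(&elapsed_ms, start, stop);
```

One thing I am unsure about: events recorded on stream 0 should capture all work on the device, so I believe the three timing methods agreeing means the measurement itself is sound.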
The approaches reflect different ways to process the same set of data: either do everything in one launch without the async API (which means more blocks per launch), or split the work into a number of smaller launches, with or without the async API.