I am trying to get familiar with OpenCL. To do so, I tried to write a kernel that calculates the sum of the elements of a vector. I used the “Parallel reduction without shared memory bank conflicts” example from the “OpenCL Programming for the CUDA Architecture” document provided by NVIDIA.
The local work size is 256 and the global work size is the smallest multiple of 256 that is not less than the number of elements.
If I run my program on a vector containing 1000 elements that are all 1, the kernel returns 4 (instead of 1000, of course). I assume it has something to do with the barrier.
Additionally, the kernel is at least one order of magnitude slower than the CPU.
I would appreciate any advice on what the problem could be.
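For reference, the pattern I am using looks roughly like this (my own sketch of the document's kernel; the buffer names and the n parameter handling are mine):

    __kernel void reduce(__global const float* input,
                         __global float* partialSums,
                         __local float* scratch,
                         const unsigned int n)
    {
        unsigned int lid = get_local_id(0);
        unsigned int gid = get_global_id(0);

        /* Load one element per work item; pad out-of-range items with 0. */
        scratch[lid] = (gid < n) ? input[gid] : 0.0f;
        barrier(CLK_LOCAL_MEM_FENCE);

        /* Sequential addressing avoids shared memory bank conflicts. */
        for (unsigned int s = get_local_size(0) / 2; s > 0; s >>= 1) {
            if (lid < s)
                scratch[lid] += scratch[lid + s];
            barrier(CLK_LOCAL_MEM_FENCE);
        }

        /* Work item 0 writes this group's partial sum. */
        if (lid == 0)
            partialSums[get_group_id(0)] = scratch[0];
    }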
This kernel appears to work with a single work group only. How are you using it?
Also, I could not find any differences between the code in the two postings. What changed?
Regarding the performance comparison with a sequential implementation, try running both with an increasing number of elements and see what happens, up to as much as your device memory allows.
I looked through the OpenCL specification and realised why the example given in “OpenCL Programming for the CUDA Architecture” runs on only one work group: synchronisation is only possible between work items of the same work group.
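As an illustrative sketch of that limitation (the kernel body here is only a placeholder):

    __kernel void barrier_scope(__global const float* data,
                                __local float* scratch)
    {
        scratch[get_local_id(0)] = data[get_global_id(0)];
        /* barrier() synchronises only the work items of THIS work
         * group; OpenCL provides no barrier across work groups, so a
         * reduction over the whole vector cannot be finished within a
         * single launch of a kernel like this. */
        barrier(CLK_LOCAL_MEM_FENCE);
        /* ... per-group work continues here ... */
    }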
Does using more work groups with fewer work items per group result in a speed-up?
Furthermore, I did some additional benchmarks and found that the kernel, once it is running, is faster than the sequential implementation. Unfortunately, 95% of the time between enqueueing the kernel and a successful finish is spent with the kernel merely in the CL_QUEUED state. What are the factors influencing how quickly a kernel is submitted after being enqueued? This effect is also the reason why the parallel implementation scales worse than the sequential one.
GPUs are built for massively parallel, numerically intensive computations. Summing the elements of a vector with a single work group does not belong to this category. The next step to improve the performance of your implementation is to launch several work groups that each sum a section of the vector and store their result in a new, smaller vector. Then run the kernel again on the smaller vector and repeat until a single element remains, as sketched below.
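A host-side sketch of that multi-pass scheme could look like this (names, buffer setup, and error handling are mine; it assumes a reduction kernel like the one above with a local size of 256, that partialBuf holds at least ceil(n/256) floats, and that the input buffer may be overwritten once its contents have been consumed):

    size_t local = 256;
    size_t n = numElements;
    cl_mem in = inputBuf;      /* holds the n input floats           */
    cl_mem out = partialBuf;   /* receives one partial sum per group */

    while (n > 1) {
        size_t groups = (n + local - 1) / local; /* partial sums produced */
        size_t global = groups * local;          /* next multiple of 256  */
        cl_uint count = (cl_uint)n;

        clSetKernelArg(kernel, 0, sizeof(cl_mem), &in);
        clSetKernelArg(kernel, 1, sizeof(cl_mem), &out);
        clSetKernelArg(kernel, 2, local * sizeof(cl_float), NULL);
        clSetKernelArg(kernel, 3, sizeof(cl_uint), &count);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &local,
                               0, NULL, NULL);

        /* The partial sums of this pass are the input of the next one. */
        cl_mem tmp = in; in = out; out = tmp;
        n = groups;
    }

    /* The single remaining element is the total sum. */
    float sum;
    clEnqueueReadBuffer(queue, in, CL_TRUE, 0, sizeof(float), &sum,
                        0, NULL, NULL);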
One has to be careful when searching for bottlenecks in an asynchronous system, which an OpenCL machine is. How are you measuring the time between enqueueing and finishing? Have you used any of the profiling capabilities that OpenCL has to determine how much of the time is spent running the kernel and how much is overhead elsewhere? See “Profiling Operations on Memory Objects and Kernels” in the specification.
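As a sketch (the OpenCL calls are standard; the surrounding names are mine, and context, device, kernel, and the work sizes are assumed to be set up already): create the queue with profiling enabled, then query the event timestamps.

    cl_int err;
    cl_command_queue queue = clCreateCommandQueue(context, device,
                                                  CL_QUEUE_PROFILING_ENABLE,
                                                  &err);
    cl_event evt;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &local,
                           0, NULL, &evt);
    clWaitForEvents(1, &evt);

    cl_ulong queued, submitted, started, ended;
    clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_QUEUED,
                            sizeof(cl_ulong), &queued, NULL);
    clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_SUBMIT,
                            sizeof(cl_ulong), &submitted, NULL);
    clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_START,
                            sizeof(cl_ulong), &started, NULL);
    clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_END,
                            sizeof(cl_ulong), &ended, NULL);

    /* All timestamps are in nanoseconds. */
    printf("queued->submit: %lu ns, submit->start: %lu ns, run: %lu ns\n",
           (unsigned long)(submitted - queued),
           (unsigned long)(started - submitted),
           (unsigned long)(ended - started));

This separates how long the kernel sat in the queue from how long it actually ran, which should tell you whether the CL_QUEUED time you observed is submission overhead or something else.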