How did you think they worked?
As pasoleatis said, any algorithm that depends on previously calculated values and can branch in multiple directions is a poor fit for the GPU.
Also, when the data set (or problem space) is small, there is really no reason to use the GPU: copying data to and from the device adds overhead that a CPU-only solution generally avoids.
Examples of algorithms that would be better on the CPU:
some recurrences, like the Fibonacci sequence (FIB)
graph algorithms, which are difficult to map to GPUs, especially if they use DFS (depth-first search)
In general there is usually a way to get a problem to work on the GPU, even if it is serial in nature, but the time spent on such a solution is usually not worth the benefit.
When GPUs do outperform CPUs, it is because some section of the problem fits a parallel model. The CPU can enforce the ordering, and the GPU can do the ‘heavy lifting’.