Performance increase? Too good to be true?

I have a fairly complex cellular automata application which uses a 154x154 grid.

In standard C++, 500 iterations take 258.22 seconds to complete.

Using CUDA, these 500 iterations only take 17.802 seconds!

On each iteration, every grid cell has to locate its circular neighbours within a certain radius and run a calculation over them. So I suppose this application is very well suited to GPU programming!
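For anyone curious, the per-cell work is roughly like the sketch below: one thread per cell scans a square window and keeps only the neighbours inside the circle. This is just a simplified stand-in, not my actual kernel; GRID, RADIUS and the plain sum are placeholders for the real rule.

```cpp
#include <cuda_runtime.h>

constexpr int GRID   = 154;   // 154x154 grid
constexpr int RADIUS = 3;     // placeholder neighbourhood radius

// One thread per cell: sum the states of circular neighbours.
__global__ void neighbourSum(const float* in, float* out)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= GRID || y >= GRID) return;

    float sum = 0.0f;
    // Scan the bounding square, but only keep cells inside the circle.
    for (int dy = -RADIUS; dy <= RADIUS; ++dy) {
        for (int dx = -RADIUS; dx <= RADIUS; ++dx) {
            int nx = x + dx;
            int ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= GRID || ny >= GRID) continue;
            if (dx * dx + dy * dy > RADIUS * RADIUS) continue;   // circular cut-off
            sum += in[ny * GRID + nx];
        }
    }
    out[y * GRID + x] = sum;   // the real rule does more than a plain sum
}

int main()
{
    size_t bytes = GRID * GRID * sizeof(float);
    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemset(d_in, 0, bytes);

    dim3 block(16, 16);
    dim3 grid((GRID + block.x - 1) / block.x, (GRID + block.y - 1) / block.y);
    neighbourSum<<<grid, block>>>(d_in, d_out);
    cudaDeviceSynchronize();

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Because every cell can be updated independently from the previous grid state, the whole pass maps cleanly onto one thread per cell.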

Have any other users found similar speed-ups?! I'm double-checking my code as this seems an unbelievable increase :thumbup:

Thom

A speedup of ~14x is nice, but also quite usual. With some problems you can even reach a speedup of more than 100x. :-)
You can take a look at nvidia.com/cuda, where they list the speedup over the CPU for nearly all featured projects.

Nevertheless, great work and keep optimizing. ;-)