The cuFFT documentation doesn’t say much about plans: how long they take to create, or how much CPU and GPU memory they consume. I suspect it’s quite a lot (I was leaking them for a while, and it didn’t take many before I ran out of memory). It seems to be more than just tables of twiddle factors…
Should I be caching them rather than creating a new one for each convolution? If I cache them, the memory stays allocated even if I don’t convolve again for a long time; if I don’t, I pay the time cost of rebuilding the plan every call.
I’m mostly doing 2D complex-to-complex transforms around 2k x 2k, if that makes a difference.
Any guidance from folks here?