I’ve been playing around with CUDA 2.2 for the last week and, as practice, started replacing Matlab functions (interp2, interpft) with CUDA MEX files. When I first noticed that Matlab’s FFT results were different from CUFFT, I chalked it up to the single vs. double precision issue. However, the differences seemed too great so I downloaded the latest FFTW library and did some comparisons.
Using a complex input vector of 512 records, I ran two tests:
I took the absolute difference of each result (FFTW-DP, FFTW-SP, CUFFT) from Matlab’s FFT output and plotted it.
I did the FFT followed by the IFFT (with appropriate scaling) and compared the result to the original data (a rough sketch of this round trip is below).
Both plots are attached to this post.
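For reference, the CUFFT side of that round trip looks roughly like this (a minimal sketch with my own variable names, not the actual MEX code; the explicit 1/N scaling is there because cuFFT’s inverse transform is unnormalized):

```
#include <cufft.h>
#include <cuda_runtime.h>
#include <vector>
#include <complex>

// FFT then IFFT of a complex vector with cuFFT, scaling by 1/N at the end
// because cuFFT's inverse transform is unnormalized.
std::vector<std::complex<float> > cufft_roundtrip(const std::vector<std::complex<float> >& h_in)
{
    const int n = (int)h_in.size();

    cufftComplex* d_data = 0;
    cudaMalloc((void**)&d_data, sizeof(cufftComplex) * n);
    // std::complex<float> and cufftComplex are both two packed floats, so a raw copy is fine.
    cudaMemcpy(d_data, &h_in[0], sizeof(cufftComplex) * n, cudaMemcpyHostToDevice);

    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, 1);
    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);
    cufftExecC2C(plan, d_data, d_data, CUFFT_INVERSE);

    std::vector<std::complex<float> > h_out(n);
    cudaMemcpy(&h_out[0], d_data, sizeof(cufftComplex) * n, cudaMemcpyDeviceToHost);
    cufftDestroy(plan);
    cudaFree(d_data);

    for (int i = 0; i < n; ++i)
        h_out[i] /= (float)n;   // the "appropriate scaling"

    return h_out;
}
```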
I found that CUFFT results were quite a bit “worse” than the FFTW-SP results… by a mean factor of 4-5 times across the 512 records. Note that when I input a vector with 600 complex samples, the CUFFT results were about 8 times worse than FFTW-SP (presumably because 600 is not a nice power of 2, 3, or 5).
I guess my question is: is this type of error expected between two single-precision methods? I am looking to implement CUFFT in an application that is sensitive to signal phase and was hoping for better.
I’d appreciate any comments that could help me better understand why this difference exists. Thanks!
Take a look at vvolkov’s improved FFT code. It was supposed to be included in the last couple of CUFFT releases, but I don’t think that it’s made it in yet. Perhaps that will have better numerical stability than the standard CUFFT release.
Also, I’m not sure how CUFFT has been written to handle non-power-of-two sizes…but if it hasn’t been done properly, it would explain why you get much less accurate results for N = 600 vs. N = 512.
I applied the same calculations to my results and got the following:
N=512   random numbers   ULP (FFTW-SP) = 0.80   ULP (CUFFT) = 1.50
N=4096  random numbers   ULP (FFTW-SP) = 1.0    ULP (CUFFT) = 2.0
N=8192  random numbers   ULP (FFTW-SP) = 1.0    ULP (CUFFT) = 2.3
N=512   my data slice    ULP (FFTW-SP) = 0.93   ULP (CUFFT) = 2.65
N=600   my data slice    ULP (FFTW-SP) = 1.04   ULP (CUFFT) = 5.03
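In case it helps interpret these numbers: one common way to define an error in ULPs is the absolute difference from a double-precision reference, divided by the spacing between single-precision values at the reference’s magnitude. A rough sketch of that calculation (names and the exact normalization are mine; it may not match exactly how the vvolkov thread computes it):

```
#include <math.h>
#include <stddef.h>
#include <vector>
#include <complex>

// Spacing between adjacent single-precision values at the magnitude of x.
static float ulp_of(float x)
{
    float ax = fabsf(x);
    return nextafterf(ax, INFINITY) - ax;
}

// Mean error, in ULPs, of a single-precision result against a double-precision reference.
// Real and imaginary parts are counted separately. (Exact zeros in the reference
// would blow this up; a real metric would need to handle that case.)
double mean_ulp_error(const std::vector<std::complex<float> >& sp,
                      const std::vector<std::complex<double> >& dp)
{
    double sum = 0.0;
    int count = 0;
    for (size_t i = 0; i < sp.size(); ++i) {
        double part_sp[2] = { sp[i].real(), sp[i].imag() };
        double part_dp[2] = { dp[i].real(), dp[i].imag() };
        for (int k = 0; k < 2; ++k) {
            sum += fabs(part_sp[k] - part_dp[k]) / ulp_of((float)part_dp[k]);
            ++count;
        }
    }
    return sum / count;
}
```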
Comparing these numbers to the performance tables in the vvolkov thread, it seems that the errors I am seeing are in the expected range, even though they can be several times worse than those from the FFTW single-precision calculations.
For interest’s sake, here are the embarrassing performance metrics of vvolkov’s FFT code for my system (the NVS290 only has 2 multiprocessors). You’ll see that it crashed partway through. Note that the listed error is the maximum found in all the batches.
I used the precompiled binaries from http://www.fftw.org/install/windows.html which use SSE for single-precision and SSE2 for double precision. So yes, the FFTW library that I was using had SSE enabled. Do you expect SSE to change the accuracy of FFTW?
Well, I expect FFTW without SSE to be more accurate than with SSE.
Anyway, I just talked to the main CUFFT developer, and there were two main points (but keep in mind I am not an expert at all on things like this, so I may be bungling what he said; I’m a systems guy):
he thinks that comparing error by doing IFFT(FFT(data)) is not necessarily the best way to do it; he recommends doing only a forward FFT and comparing the results of FFTW and CUFFT directly (roughly as in the sketch after this list).
if there are particular sizes you are worried about for whatever reason, you are more than welcome to file a bug as a registered developer.
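For concreteness, a forward-only comparison along those lines could look roughly like this (my own sketch, not an official test: random input, FFTW single precision vs. CUFFT, maximum absolute difference, error checking omitted):

```
#include <fftw3.h>
#include <cufft.h>
#include <cuda_runtime.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int N = 512;

    // Same random complex input for both libraries.
    fftwf_complex* in  = (fftwf_complex*)fftwf_malloc(sizeof(fftwf_complex) * N);
    fftwf_complex* ref = (fftwf_complex*)fftwf_malloc(sizeof(fftwf_complex) * N);
    for (int i = 0; i < N; ++i) {
        in[i][0] = (float)rand() / RAND_MAX - 0.5f;
        in[i][1] = (float)rand() / RAND_MAX - 0.5f;
    }

    // FFTW single-precision forward transform (FFTW_ESTIMATE leaves the input untouched).
    fftwf_plan p = fftwf_plan_dft_1d(N, in, ref, FFTW_FORWARD, FFTW_ESTIMATE);
    fftwf_execute(p);
    fftwf_destroy_plan(p);

    // cuFFT forward transform of the same input.
    cufftComplex* d = 0;
    cudaMalloc((void**)&d, sizeof(cufftComplex) * N);
    cudaMemcpy(d, in, sizeof(cufftComplex) * N, cudaMemcpyHostToDevice);
    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);
    cufftExecC2C(plan, d, d, CUFFT_FORWARD);
    cufftComplex* gpu = (cufftComplex*)malloc(sizeof(cufftComplex) * N);
    cudaMemcpy(gpu, d, sizeof(cufftComplex) * N, cudaMemcpyDeviceToHost);
    cufftDestroy(plan);
    cudaFree(d);

    // Largest element-wise difference between the two forward transforms.
    float maxdiff = 0.0f;
    for (int i = 0; i < N; ++i) {
        float dre = fabsf(ref[i][0] - gpu[i].x);
        float dim = fabsf(ref[i][1] - gpu[i].y);
        if (dre > maxdiff) maxdiff = dre;
        if (dim > maxdiff) maxdiff = dim;
    }
    printf("max |FFTW-SP - CUFFT| for N=%d: %g\n", N, maxdiff);

    fftwf_free(in);
    fftwf_free(ref);
    free(gpu);
    return 0;
}
```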
My first test was as you suggested: comparing forward FFT results from FFTW single precision and CUFFT. I included the IFFT(FFT(X)) test because I was implementing a replacement for Matlab’s interpft (FFT, zero-pad, IFFT).
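For context, the zero-padding happens in the frequency domain: the spectrum is split at the Nyquist frequency, the zeros go in the middle (the new high frequencies), and for an even-length input the shared Nyquist bin gets split between the two halves. A rough host-side illustration of just that layout (my own sketch of the idea, not the actual MEX code):

```
#include <vector>
#include <complex>
#include <stddef.h>

// Given the forward FFT "X" of a length-m signal, build the length-n (n > m)
// spectrum that an interpft-style interpolator feeds to the inverse FFT.
std::vector<std::complex<float> >
zero_pad_spectrum(const std::vector<std::complex<float> >& X, size_t n)
{
    const size_t m = X.size();
    std::vector<std::complex<float> > Y(n, std::complex<float>(0.0f, 0.0f));

    const size_t pos = (m + 1) / 2;   // DC plus positive frequencies
    const size_t neg = m - pos;       // negative frequencies (includes Nyquist when m is even)

    for (size_t k = 0; k < pos; ++k) Y[k] = X[k];                  // front of the new spectrum
    for (size_t k = 0; k < neg; ++k) Y[n - neg + k] = X[pos + k];  // back of the new spectrum

    if (m % 2 == 0) {
        // Split the Nyquist bin so the padded spectrum stays conjugate-symmetric for real input.
        std::complex<float> half_nyquist = X[m / 2] * 0.5f;
        Y[m / 2]     = half_nyquist;
        Y[n - m / 2] = half_nyquist;
    }
    return Y;
}
```

After the inverse transform, the result still needs to be scaled so the amplitudes match the original; with cuFFT’s unnormalized inverse that works out to dividing by the original length m.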
The CUFFT error seems to differ from FFTW for all sizes, just worse for lengths that are not powers of small primes (as stated in the CUFFT 2.1 doc). I also tried vvolkov’s FFT, and its errors were similar to the CUFFT implementation, which leads me to think that the errors are at the expected level… I just don’t understand why they are so much worse than FFTW’s (relatively speaking, of course).
Sin and cos implemented in GPU hardware have lower precision than on the CPU. The description of __sinf and __cosf in the CUDA programming guide (http://developer.download.nvidia.com/compu…2.pdf#page=131) gives an absolute error of 2^-21.19. That is a few bits short of full IEEE single precision (which I believe is 2^-23, the largest relative gap between two consecutive floating-point numbers; it’s not half of that, because the standard does not require transcendental functions to be exactly rounded, due to the table maker’s dilemma).
CUDA also has full-precision sin and cos functions, but they are implemented in software and run much slower.
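If you want to see the gap directly, you can run the same angles through both the intrinsic and the regular function and check them against a double-precision reference on the host; a rough standalone sketch (sizes and launch configuration arbitrary, cleanup and error checking omitted):

```
#include <cuda_runtime.h>
#include <math.h>
#include <stdio.h>

// Evaluate the same angles with the fast hardware intrinsic __sinf and with the
// ordinary single-precision sinf, so both can be compared on the host.
__global__ void sin_two_ways(const float* x, float* fast, float* full, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        fast[i] = __sinf(x[i]);   // special-function unit, roughly 2^-21 absolute error
        full[i] = sinf(x[i]);     // software path, closer to full single precision
    }
}

int main(void)
{
    const int n = 1 << 20;
    float* h_x    = new float[n];
    float* h_fast = new float[n];
    float* h_full = new float[n];
    for (int i = 0; i < n; ++i)
        h_x[i] = -3.14159265f + 6.28318531f * i / n;   // angles spanning [-pi, pi)

    float *d_x, *d_fast, *d_full;
    cudaMalloc((void**)&d_x,    n * sizeof(float));
    cudaMalloc((void**)&d_fast, n * sizeof(float));
    cudaMalloc((void**)&d_full, n * sizeof(float));
    cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);

    sin_two_ways<<<(n + 255) / 256, 256>>>(d_x, d_fast, d_full, n);
    cudaMemcpy(h_fast, d_fast, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaMemcpy(h_full, d_full, n * sizeof(float), cudaMemcpyDeviceToHost);

    // Worst-case absolute error of each version against double-precision sin().
    double worst_fast = 0.0, worst_full = 0.0;
    for (int i = 0; i < n; ++i) {
        double ref = sin((double)h_x[i]);
        double ef = fabs(h_fast[i] - ref);
        double es = fabs(h_full[i] - ref);
        if (ef > worst_fast) worst_fast = ef;
        if (es > worst_full) worst_full = es;
    }
    printf("max abs error vs double: __sinf = %g, sinf = %g\n", worst_fast, worst_full);
    return 0;   // (cleanup omitted for brevity)
}
```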