Hey all,
I’ve been experimenting with accelerating the computation of eigenvalues of a 25x25 matrix. To do this, I’m using Jacket, a MATLAB add-on that talks to CUDA, and a Quadro FX 5800 GPU.
The short version: Jacket takes longer to find eigenvalues for smaller matrices. I assume that’s because the processing time saved is overwhelmed by the overhead of creating a 25x25 matrix in GPU memory, passing data back and forth, and so on. On larger matrices (above 800x800 in this case), Jacket pulls ahead of MATLAB. That’s good, and I can see how and why CUDA really starts flexing its muscle, but again, I’m strictly interested in 25x25 matrices.
I’ve attached the paper; check it out if you want (a lot) more detail.
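As a side note, one way to sanity-check the overhead theory is to time the GPU round trip separately from the eigenvalue call. Here’s a minimal sketch; I’m assuming Jacket’s gdouble()/double() casts for moving a matrix to the GPU and back, so adjust the names to whatever your Jacket version actually exposes:

% Sketch: measure only the host-to-GPU-and-back round trip for a 25x25 matrix
% (assumes Jacket provides gdouble() for the upload and double() for the download)
x = 25;
n = 1000;
t_xfer = zeros(1, n);
for i = 1:n
    A = rand(x,x);       % host-side matrix
    tic;
    gA = gdouble(A);     % push to GPU memory
    gsync;               % wait for the transfer to complete
    A2 = double(gA);     % pull it straight back to host memory
    t_xfer(i) = toc;
end
fprintf('mean round-trip transfer: %.3g ms\n', 1000*mean(t_xfer));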
Here’s the MATLAB code I used:
clear all
clc
x = 25;      % matrix dimension
n = 10000;   % number of iterations
for i = 1:n
    tic;
    A = rand(x,x);   % 25x25 matrix of uniform random numbers
    B = eig(A);      % eigenvalues on the CPU
    C(i) = toc;      % per-iteration time in seconds
end
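In case it’s useful for anyone reproducing this, the per-iteration timings collected in C can be summarized with something like:

% Summarize the timing vector (C holds seconds per iteration)
fprintf('mean: %.3g ms, median: %.3g ms, min: %.3g ms\n', ...
        1000*mean(C), 1000*median(C), 1000*min(C));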
(In both snippets, tic is placed before the matrix generation, so the timing includes rand/grand as well as eig. As the paper shows, though, that doesn’t make much difference.)
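If you want to time only eig() itself, the variant is simply to move tic after the matrix generation; here’s a sketch for the CPU case:

for i = 1:n
    A = rand(x,x);   % generate the matrix outside the timed region
    tic;
    B = eig(A);      % time only the eigenvalue computation
    C(i) = toc;
end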
And here’s the Jacket code I used. I re-coded the MATLAB code according to the instructions on the Jacket wiki:
clear all
clc
x = 25;      % matrix dimension
n = 10000;   % number of iterations
for i = 1:n
    tic;
    gsync;             % synchronize with the GPU
    A = grand(x,x);    % 25x25 random matrix created in GPU memory
    B = eig(A);        % eigenvalues via Jacket on the GPU
    C(i) = toc;        % per-iteration time in seconds
end
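One methodological note on the Jacket loop: my understanding is that gsync blocks until all pending GPU work has finished, so a variant that calls it just before toc (rather than right after tic) makes sure the timer isn’t stopped while the GPU is still busy. If Jacket also compiles kernels on first use, it may be worth discarding the first entry of C as warm-up. A sketch of that variant:

for i = 1:n
    tic;
    A = grand(x,x);   % random matrix created directly in GPU memory
    B = eig(A);       % eigenvalues via Jacket
    gsync;            % wait for the GPU to finish before stopping the timer
    C(i) = toc;
end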
For the code posted above, we also tried unrolling the loop, repeating “A = grand(x,x)” and “B = eig(A)” ten times per iteration (a sketch follows below). The work still executes sequentially, so it didn’t help in any significant way.
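For reference, the unrolled body looked roughly like this (only three of the ten repetitions shown here):

for i = 1:n
    tic;
    gsync;
    A = grand(x,x); B = eig(A);   % repetition 1
    A = grand(x,x); B = eig(A);   % repetition 2
    A = grand(x,x); B = eig(A);   % repetition 3
    % ...and so on, ten repetitions per pass...
    C(i) = toc;
end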
Bottom line: is there anything I should be doing differently to get that 25x25 matrix’s eigenvalues to compute faster? Or am I always going to hit this wall?
Thanks in advance for any replies.
Results Upload.pdf (398 KB)