Eigenvector

Has anyone developed an algorithm to calculate the eigenvectors of large matrices using CUDA? Or does anyone have any resources that might help me in my quest? Obviously the eigenvalue program is helpful. The biggest problem is that the math is a bit beyond me, and since you have to use complicated numerical approaches, it is more difficult to program than something for very small matrices.

In fact, any resources on good eigenvector numerical methods would be helpful.

Thanks

Joe

http://mathworld.wolfram.com/Eigenvector.html

Don’t know how I didn’t find this in my searching :P I think this should help me. One question, if you are familiar at all with this or related algorithms: the determinant of a diagonal matrix is just the product of the elements on the diagonal. So the question becomes, is there an easy way of calculating the determinant of a tridiagonal matrix, since that is what I’ll be working with?

Edit: Nevermind, I think I might have found the answer to my question.

For math-type questions, I always check Wolfram first ;)

So it seems that finding the determinant is a recursive algorithm… which is a problem. /Sigh. There has to be an easier way (and by “has to,” I mean there probably isn’t, but it would be amazing if there was).

Thanks for the help.

You can do a while() { } in your code, can you not? There are a lot of recursive algorithms that can be rewritten as a while loop.

Yeah, I probably could find a way to speed it up. First, however, I must work to fully understand the math behind it :-\ If you look up “tridiagonal matrix” on Wikipedia, there is an entry describing how to find the determinant of such a matrix. The notation is a bit confusing to me, as they don’t tell you whether a_nn is just the entry of A at position (n, n) or something different.
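
Edit 2: Assuming a_nn really is just the diagonal entry at position (n, n), which is how I’m reading it, the recurrence f_n = a_n * f_(n-1) - c_(n-1) * b_(n-1) * f_(n-2) unrolls into a single loop, so the whole determinant is one linear pass rather than a deep recursion. A minimal sketch of that idea (function and variable names are just mine for illustration):

```cpp
#include <cstdio>
#include <vector>

// Sketch: determinant of an n x n tridiagonal matrix via the three-term
// continuant recurrence f_n = a_n*f_(n-1) - c_(n-1)*b_(n-1)*f_(n-2),
// unrolled into a loop. a = main diagonal (size n), b = superdiagonal,
// c = subdiagonal (both size n-1). Names are illustrative only.
double tridiag_det(const std::vector<double>& a,
                   const std::vector<double>& b,
                   const std::vector<double>& c) {
    double fPrev = 1.0;   // f_0 = 1 (determinant of the empty matrix)
    double f = a[0];      // f_1 = a_1
    for (size_t n = 1; n < a.size(); ++n) {
        double fNext = a[n] * f - c[n - 1] * b[n - 1] * fPrev;
        fPrev = f;
        f = fNext;
    }
    return f;
}

int main() {
    // 3x3 example: diagonal {2,2,2}, off-diagonals {-1,-1}; det = 4
    std::vector<double> a{2, 2, 2}, b{-1, -1}, c{-1, -1};
    printf("det = %f\n", tridiag_det(a, b, c));
}
```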

The problem with calculating the determinant recursively is not a matter of programming, but patience. IIRC, the recursive cofactor-expansion determinant calculation is O(N!) (factorial!).

Check a library for a copy of Numerical Recipes. I’d bet it has an efficient eigenvector routine. Of course, it’ll be for a single thread, so you’ll have to come up with a data-parallel approach yourself.

Most methods I’m familiar with (as a physics PhD) involve transforming the matrix until it is diagonal, at which point the problem becomes trivial. Lanczos (sp?) is one method that works well up to roughly 10-million-by-10-million sparse matrices. I’m sure there are simpler techniques that work well for smaller matrices.

It seems that reducing to a tridiagonal matrix is significantly less complicated, but not nearly as clean and easy as a full diagonal. I might have to implement Lanczos (your spelling was correct). Most things I have seen involve at least tridiagonalizing the matrix. I’d be curious to see whether taking the determinant of a tridiagonal is significantly more costly than of a diagonal. I’m kind of thinking it will be, which pushes me to just implement Lanczos from the start.
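
For reference, here is my rough understanding of the basic Lanczos iteration as a serial sketch: starting from a unit vector, it builds the alpha (diagonal) and beta (off-diagonal) coefficients of the tridiagonal matrix. Naming is my own, and from what I’ve read a real implementation also needs reorthogonalization of the q vectors to stay stable; this toy version omits it.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Sketch: basic Lanczos iteration for a dense symmetric n x n matrix A
// (row-major). After m steps, alpha (diagonal) and beta (off-diagonal)
// define a tridiagonal matrix T_m whose eigenvalues approximate those
// of A. A real implementation also reorthogonalizes the q vectors.
void lanczos(const std::vector<double>& A, int n, int m,
             std::vector<double>& alpha, std::vector<double>& beta) {
    std::vector<double> q(n, 0.0), qPrev(n, 0.0), w(n);
    q[0] = 1.0;   // unit start vector (a random vector is the usual choice)
    double b = 0.0;
    for (int j = 0; j < m; ++j) {
        for (int i = 0; i < n; ++i) {       // w = A*q - beta*qPrev
            double s = 0.0;
            for (int k = 0; k < n; ++k) s += A[i * n + k] * q[k];
            w[i] = s - b * qPrev[i];
        }
        double a = 0.0;                     // alpha_j = q . w
        for (int i = 0; i < n; ++i) a += q[i] * w[i];
        alpha.push_back(a);
        for (int i = 0; i < n; ++i) w[i] -= a * q[i];
        b = 0.0;                            // beta_j = ||w||
        for (int i = 0; i < n; ++i) b += w[i] * w[i];
        b = std::sqrt(b);
        if (b == 0.0) break;                // hit an invariant subspace
        if (j + 1 < m) beta.push_back(b);
        for (int i = 0; i < n; ++i) {       // advance the q vectors
            qPrev[i] = q[i];
            q[i] = w[i] / b;
        }
    }
}

int main() {
    std::vector<double> A{ 2, -1,  0,
                          -1,  2, -1,
                           0, -1,  2};      // already tridiagonal, easy check
    std::vector<double> alpha, beta;
    lanczos(A, 3, 3, alpha, beta);
    for (double x : alpha) printf("alpha %f\n", x);
    for (double x : beta)  printf("beta  %f\n", x);
}
```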

Even with pure diagonalization, doesn’t the determinant of the matrix grow to some unmanageable level? I mean, if the average magnitude of the entries in your matrix is around 2, it seems like a 1000x1000 matrix will yield a determinant on the order of 2^1000, which effectively can’t be processed on a standard PC.

Edit: You can ignore this post.

Hmm, you may be right about going to a tridiagonal vs. a diagonal for general matrices. In physics, we can sometimes be obsessed with Hermitian matrices (because of their physical significance in quantum mechanics), which always diagonalize to nice real-valued eigenvalues.

And now that you mention it, I recall that even for Hermitian matrices, my office mate was using Lanczos to get a tridiagonal, which he then just dumped into a LAPACK routine to finish the job.

Do you know anything about Jacobi’s algorithm for calculating eigenvectors? I was looking at a paper, and the process seems straightforward enough and fairly parallelizable. However, I haven’t seen any specific numbers on the calculation time it needs. The nice thing is that it finds all eigenvectors, whereas many algorithms focus on the dominant eigenvector.

I knew there was a reason I decided not to major/minor in math. I now remember :)

Joe

I’m going to post my new questions on a linear algebra forum. I’m pretty much getting outside the realm of actual CUDA development here.

Still, the results, or the reasons why it is not feasible on CUDA, are very interesting, so if you can, let us know.

The more I look at it, the more feasible it looks. By the end of next week I should be able to tell you (hopefully before then). I think I may have found a way to implement this, and my earlier concern was just a misunderstanding of the linear algebra involved. I’ll keep you posted.

Joe

Determinants are usually computed using LU factorization: the determinant is the product of the diagonal entries of the U factor (times -1 if the pivoting used an odd number of row swaps).
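
A minimal sketch of that approach (Gaussian elimination with partial pivoting; the determinant is the running product of the pivots, with a sign flip on every row swap; the function name is just for illustration):

```cpp
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

// Sketch: determinant via Gaussian elimination (LU) with partial pivoting.
// det(A) = product of the pivots, with the sign flipped on each row swap.
// A is n x n, row-major, passed by value since it gets overwritten.
double lu_det(std::vector<double> A, int n) {
    double det = 1.0;
    for (int k = 0; k < n; ++k) {
        int piv = k;                               // find the largest pivot
        for (int i = k + 1; i < n; ++i)
            if (std::fabs(A[i * n + k]) > std::fabs(A[piv * n + k])) piv = i;
        if (A[piv * n + k] == 0.0) return 0.0;     // singular matrix
        if (piv != k) {                            // swap rows, flip sign
            for (int j = 0; j < n; ++j) std::swap(A[k * n + j], A[piv * n + j]);
            det = -det;
        }
        det *= A[k * n + k];
        for (int i = k + 1; i < n; ++i) {          // eliminate below pivot
            double m = A[i * n + k] / A[k * n + k];
            for (int j = k; j < n; ++j) A[i * n + j] -= m * A[k * n + j];
        }
    }
    return det;
}

int main() {
    std::vector<double> A{4, 3,
                          6, 3};                   // det = 12 - 18 = -6
    printf("det = %f\n", lu_det(A, 2));
}
```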

To compute eigenvectors, consider the inverse iteration algorithm. If your matrix is dense, reduce it to condensed form (tridiagonal for symmetric matrices, Hessenberg otherwise) first.

Is your matrix symmetric? The symmetric eigenvalue problem is simpler.
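
Inverse iteration in outline: pick a shift mu near the eigenvalue you want, then repeatedly solve (A - mu*I) w = v and renormalize; v converges to the eigenvector whose eigenvalue is closest to mu. A rough dense sketch with names of my own choosing; a real implementation would factor A - mu*I once and reuse the factors instead of re-eliminating every iteration:

```cpp
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

// Solve M x = b by Gaussian elimination with partial pivoting. In real
// code you would factor M once and reuse the factors every iteration.
std::vector<double> solve(std::vector<double> M, std::vector<double> b, int n) {
    for (int k = 0; k < n; ++k) {
        int piv = k;
        for (int i = k + 1; i < n; ++i)
            if (std::fabs(M[i * n + k]) > std::fabs(M[piv * n + k])) piv = i;
        for (int j = 0; j < n; ++j) std::swap(M[k * n + j], M[piv * n + j]);
        std::swap(b[k], b[piv]);
        for (int i = k + 1; i < n; ++i) {
            double m = M[i * n + k] / M[k * n + k];
            for (int j = k; j < n; ++j) M[i * n + j] -= m * M[k * n + j];
            b[i] -= m * b[k];
        }
    }
    for (int i = n - 1; i >= 0; --i) {             // back substitution
        for (int j = i + 1; j < n; ++j) b[i] -= M[i * n + j] * b[j];
        b[i] /= M[i * n + i];
    }
    return b;
}

// Sketch: inverse iteration. v converges to the eigenvector of A whose
// eigenvalue is closest to the shift mu.
std::vector<double> inverse_iteration(const std::vector<double>& A, int n,
                                      double mu, int iters) {
    std::vector<double> M(A);
    for (int i = 0; i < n; ++i) M[i * n + i] -= mu;   // M = A - mu*I
    std::vector<double> v(n);
    for (int i = 0; i < n; ++i) v[i] = i + 1.0;       // start vector: anything
                                                      // not orthogonal to target
    for (int it = 0; it < iters; ++it) {
        v = solve(M, v, n);
        double norm = 0.0;                            // renormalize each pass
        for (double x : v) norm += x * x;
        norm = std::sqrt(norm);
        for (double& x : v) x /= norm;
    }
    return v;
}

int main() {
    std::vector<double> A{ 2, -1,
                          -1,  2};   // eigenvalues 1 and 3
    std::vector<double> v = inverse_iteration(A, 2, 2.9, 20);
    printf("v = (%f, %f)\n", v[0], v[1]);   // ~ +/-(1,-1)/sqrt(2)
}
```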

Update: I have resorted to using a Jacobi method for the eigendecomposition of the matrix. This seems to be a solid parallel algorithm at the very least. When I finish, I will have to test it against MATLAB’s implementation of eigendecomposition and against a CPU-based Jacobi method (if MATLAB is not using Jacobi, which is fairly likely).
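
The core of the method, as I understand it, is to sweep over the off-diagonal entries and zero each one with a plane rotation, accumulating the rotations to recover the eigenvectors. Here is the minimal serial sketch I’m starting from (my own naming; a sketch, not production code). The independent rotations within a sweep are what I hope to parallelize:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Sketch: cyclic Jacobi eigenvalue algorithm for a symmetric n x n matrix
// A (row-major). Each sweep zeroes every off-diagonal entry with a plane
// rotation; V accumulates the rotations, so its columns converge to the
// eigenvectors while the diagonal of A converges to the eigenvalues.
void jacobi_eig(std::vector<double>& A, std::vector<double>& V, int n,
                int sweeps = 50) {
    for (int i = 0; i < n; ++i)                       // V = identity
        for (int j = 0; j < n; ++j) V[i * n + j] = (i == j) ? 1.0 : 0.0;
    for (int s = 0; s < sweeps; ++s) {
        double off = 0.0;                             // convergence check
        for (int p = 0; p < n; ++p)
            for (int q = p + 1; q < n; ++q) off += std::fabs(A[p * n + q]);
        if (off < 1e-12) break;
        for (int p = 0; p < n; ++p)
            for (int q = p + 1; q < n; ++q) {
                if (A[p * n + q] == 0.0) continue;
                // angle that zeroes A[p][q] under the rotation
                double th = 0.5 * std::atan2(2.0 * A[p * n + q],
                                             A[q * n + q] - A[p * n + p]);
                double c = std::cos(th), sn = std::sin(th);
                for (int i = 0; i < n; ++i) {         // columns: A <- A*J
                    double aip = A[i * n + p], aiq = A[i * n + q];
                    A[i * n + p] = c * aip - sn * aiq;
                    A[i * n + q] = sn * aip + c * aiq;
                }
                for (int i = 0; i < n; ++i) {         // rows: A <- J^T*A
                    double api = A[p * n + i], aqi = A[q * n + i];
                    A[p * n + i] = c * api - sn * aqi;
                    A[q * n + i] = sn * api + c * aqi;
                }
                for (int i = 0; i < n; ++i) {         // eigenvectors: V <- V*J
                    double vip = V[i * n + p], viq = V[i * n + q];
                    V[i * n + p] = c * vip - sn * viq;
                    V[i * n + q] = sn * vip + c * viq;
                }
            }
    }
}

int main() {
    std::vector<double> A{ 2, -1,  0,
                          -1,  2, -1,
                           0, -1,  2};
    std::vector<double> V(9);
    jacobi_eig(A, V, 3);
    for (int i = 0; i < 3; ++i)   // expect 2-sqrt(2), 2, 2+sqrt(2), in some order
        printf("lambda = %f\n", A[i * 3 + i]);
}
```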

Hi senorbum,

Like you, I also want to implement a data-parallel version of an eigenvector routine. I am sure many people in the CUDA community are also hunting for a similar solution, as this problem is very common.

I also read a few papers explaining the Jacobi method. I think Jacobi is an iterative method where the results of the previous iteration are taken as input for the next iteration (if I am right!). So I just want to know: can we do iterative algorithms in parallel on GPUs?

Can you please post the code for the eigenvector routine once you are done? I think it will be very helpful for all of us.

Best regards,
hmanak

I’m trying to wrap my brain around this as well. I’ve read in multiple places that it is potentially parallel, but I don’t see how when looking at code/pseudocode. I’ll keep you updated on my progress. Do you have a specific algorithm in mind for this type of problem?

Edit: I feel dumb. I got Jacobi’s eigenvalue algorithm mixed up with the Jacobi method, which is used to solve a system of linear equations and is different from eigensystem computations.
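
For anyone else who mixed them up like I did: the Jacobi method solves A x = b by recomputing every component from the previous iterate, x_i <- (b_i - sum over j != i of a_ij * x_j) / a_ii, which incidentally is why it maps well to parallel hardware: all components within one iteration are independent. A minimal sketch to contrast it with the eigenvalue algorithm above (names are mine):

```cpp
#include <cstdio>
#include <vector>

// Sketch: the Jacobi *method* for the linear system A x = b (not the
// eigenvalue algorithm). Every component of the new iterate depends only
// on the previous iterate, so all n updates in one iteration are
// independent; the iterations themselves stay sequential.
std::vector<double> jacobi_solve(const std::vector<double>& A,
                                 const std::vector<double>& b,
                                 int n, int iters) {
    std::vector<double> x(n, 0.0), xNew(n);
    for (int it = 0; it < iters; ++it) {
        for (int i = 0; i < n; ++i) {      // each i could be its own thread
            double s = b[i];
            for (int j = 0; j < n; ++j)
                if (j != i) s -= A[i * n + j] * x[j];
            xNew[i] = s / A[i * n + i];
        }
        x = xNew;
    }
    return x;
}

int main() {
    std::vector<double> A{4, 1,
                          1, 3};           // diagonally dominant: converges
    std::vector<double> b{5, 4};           // exact solution x = (1, 1)
    std::vector<double> x = jacobi_solve(A, b, 2, 50);
    printf("x = (%f, %f)\n", x[0], x[1]);
}
```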

For the time being, here are the papers/journals I know of in which people have successfully implemented a parallel version of the eigenvalue and eigenvector problem.

http://csdl2.computer.org/persagen/DLAbsTo…FCS.1995.492469

http://www3.interscience.wiley.com/journal…ETRY=1&SRETRY=0

Actually, I don’t have access to these papers, so can you look into them and pull out anything useful? From the abstracts, it seems they have struck gold!!

Keep us updated.

Best regards,
hmanak