Matrix Inversion

Dear CUDA Programmers,

I have already read other posts about inverting matrices on the GPU. As I am quite new to CUDA, I was wondering whether somebody has finally written a CUDA kernel to invert a matrix. I would be very grateful if you could share it with the CUDA community.

Thanks in advance for your help.

Cheers!

You will have to be more specific as to your requirements.

  1. Do you want a direct solver or an iterative method?
  2. Is it really the inverse that you require, rather than the solution to a linear system of equations? (See the sketch after this list.)
  3. What are the properties of the matrix in question (e.g., sparsity, symmetry, etc.)?
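
For example, if all you really need is the solution of A x = b, a single LAPACK call (ZGESV) solves the system directly without ever forming the inverse. A minimal CPU-side sketch, assuming the LAPACKE C interface is available (the 2x2 matrix here is arbitrary, just to illustrate the distinction):

    #include <stdio.h>
    #include <complex.h>
    #include <lapacke.h>

    /* Solve A*x = b for a general complex matrix without forming the inverse:
     * ZGESV does an LU factorization followed by forward/back substitution. */
    int main(void)
    {
        lapack_int n = 2, nrhs = 1;
        lapack_int ipiv[2];

        /* Row-major 2x2 complex matrix A and right-hand side b. */
        lapack_complex_double A[4] = {
            lapack_make_complex_double(4.0,  0.0), lapack_make_complex_double(1.0, 2.0),
            lapack_make_complex_double(1.0, -2.0), lapack_make_complex_double(3.0, 0.0)
        };
        lapack_complex_double b[2] = {
            lapack_make_complex_double(1.0, 0.0),
            lapack_make_complex_double(0.0, 1.0)
        };

        /* On exit, b holds the solution x. */
        lapack_int info = LAPACKE_zgesv(LAPACK_ROW_MAJOR, n, nrhs, A, n, ipiv, b, nrhs);
        if (info != 0) {
            fprintf(stderr, "zgesv failed: info = %d\n", (int)info);
            return 1;
        }

        /* Assumes LAPACKE's default C99 complex type, so creal/cimag apply. */
        printf("x = (%g%+gi, %g%+gi)\n", creal(b[0]), cimag(b[0]), creal(b[1]), cimag(b[1]));
        return 0;
    }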

Cheers.

I usually use the LAPACK inversion routines

ZGETRF
ZGETRI

to invert a complex matrix which, a priori, has no special structure.

I thought that someone would already have adapted these routines for CUDA.
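
For reference, the call sequence I use on the CPU, through the LAPACKE C interface, looks roughly like this (just host code, not a CUDA port; invert_zmatrix is a name I made up for the example):

    #include <stdlib.h>
    #include <lapacke.h>

    /* In-place inversion of a general n x n complex matrix:
     * ZGETRF computes the LU factorization, ZGETRI builds the inverse from it.
     * Returns 0 on success, >0 if the matrix turns out to be singular. */
    int invert_zmatrix(lapack_complex_double *A, lapack_int n)
    {
        lapack_int *ipiv = malloc((size_t)n * sizeof *ipiv);
        if (!ipiv) return -1;

        lapack_int info = LAPACKE_zgetrf(LAPACK_ROW_MAJOR, n, n, A, n, ipiv);
        if (info == 0)
            info = LAPACKE_zgetri(LAPACK_ROW_MAJOR, n, A, n, ipiv);

        free(ipiv);
        return (int)info;
    }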

I don’t know of anyone who’s implemented this yet. I’ve had a topic going for a while now (http://forums.nvidia.com/index.php?showtopic=76614), asking nVidia to complete cuBLAS and ‘cuLAPACK’ support. Haven’t heard anything back yet though…

There is a Ph.D. student at Berkeley (Vasily Volkov) who has done some work on optimizing CUBLAS for nVidia hardware; he has written a paper on doing matrix factorizations on the GPU. Depending on what kind of problem you’re solving, perhaps you could just do an LU decomposition to get the answer you need.
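
To make that concrete: if the end goal is a solution rather than the inverse itself, you can factor once with ZGETRF and then back-substitute with ZGETRS for any number of right-hand sides, skipping ZGETRI entirely. A hedged host-side sketch (LAPACKE again, not GPU code; lu_solve is a made-up name):

    #include <stdlib.h>
    #include <lapacke.h>

    /* Factor A = P*L*U once (ZGETRF), then solve A*X = B for nrhs right-hand
     * sides (ZGETRS). B is overwritten with the solution; no inverse is formed. */
    int lu_solve(lapack_complex_double *A, lapack_complex_double *B,
                 lapack_int n, lapack_int nrhs)
    {
        lapack_int *ipiv = malloc((size_t)n * sizeof *ipiv);
        if (!ipiv) return -1;

        lapack_int info = LAPACKE_zgetrf(LAPACK_ROW_MAJOR, n, n, A, n, ipiv);
        if (info == 0)
            info = LAPACKE_zgetrs(LAPACK_ROW_MAJOR, 'N', n, nrhs, A, n, ipiv, B, nrhs);

        free(ipiv);
        return (int)info;
    }

The factorization is the expensive part; the ZGETRS solves just reuse it, which is normally cheaper and numerically safer than forming and applying an explicit inverse.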

EDIT: Link to Volkov’s page:

http://www.cs.berkeley.edu/~volkov/