I have already read other posts regarding matrix inversion on the GPU. As I am quite new to CUDA, I was wondering if somebody had finally written a CUDA kernel to invert a matrix. I would be very grateful if you could share it with the CUDA community.

I don’t know of anyone that’s implemented this yet. I’ve had a topic going for a while now (http://forums.nvidia.com/index.php?showtopic=76614), asking nVidia to complete cuBLAS and ‘cuLAPACK’ support. Haven’t heard anything back yet though…

There is a Ph.D. student at Berkeley who has done some work on optimizing CUBLAS for nVidia hardware; he has written a paper on doing factorizations on the GPU. Depending on what kind of problem you’re solving, you may not need the explicit inverse at all; if you just need to solve a linear system, an LU decomposition followed by forward and back substitution will get you the answer, and it’s cheaper and more numerically stable than inverting.
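To make the LU suggestion concrete, here is a minimal plain-Python sketch of solving A x = b via a Doolittle LU factorization (no pivoting, so it assumes the leading minors are nonsingular). This is a CPU illustration of the algorithm only, not CUDA code; a GPU version would map the inner loops onto threads, and the function names here are just ones I made up for the example.

```python
def lu_decompose(a):
    # Doolittle LU: A = L * U, with L unit lower triangular.
    # No pivoting, so this assumes no zero pivots are encountered.
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    u = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            u[i][j] = a[i][j] - sum(l[i][k] * u[k][j] for k in range(i))
        l[i][i] = 1.0
        for j in range(i + 1, n):
            l[j][i] = (a[j][i] - sum(l[j][k] * u[k][i] for k in range(i))) / u[i][i]
    return l, u

def lu_solve(l, u, b):
    # Forward-substitute L y = b, then back-substitute U x = y.
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(l[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(u[i][k] * x[k] for k in range(i + 1, n))) / u[i][i]
    return x

# Example: solve [[4, 3], [6, 3]] x = [10, 12]  ->  x = [1, 2]
l, u = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
x = lu_solve(l, u, [10.0, 12.0])
```

Note that once you have L and U you can also recover the explicit inverse by solving against each column of the identity, but if a solve is all you need, skipping that step saves work.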