I have already read other posts about inverting matrices on the GPU. As I am quite new to CUDA, I was wondering whether somebody has actually written a CUDA kernel to invert a matrix. I would be very grateful if you could share it with the CUDA community.
There is a Ph.D. student at Berkeley who has done some work on optimizing CUBLAS for NVIDIA GPUs; he has written a paper on doing matrix factorizations on the GPU. Depending on what kind of problem you're solving, you may not need the explicit inverse at all: an LU decomposition followed by a solve is often enough to get the answer you need.
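To make that concrete, here is a minimal sketch (my own example, not from the paper mentioned above) of solving A*x = b on the GPU with an LU factorization via cuSOLVER's getrf/getrs, rather than forming the inverse explicitly. It assumes a CUDA toolkit that ships cuSOLVER (a later addition than this discussion) and column-major storage, as the dense cuSOLVER routines expect.

```c
// Sketch: solve A*x = b via LU factorization on the GPU (no explicit inverse).
// Assumes cuSOLVER is available; compile with: nvcc lu_solve.c -lcusolver
#include <stdio.h>
#include <cuda_runtime.h>
#include <cusolverDn.h>

int main(void)
{
    const int n = 3, nrhs = 1;
    // A in column-major order, and the right-hand side b
    double A[9] = {4, 1, 2,   1, 3, 0,   2, 0, 5};
    double b[3] = {1, 2, 3};

    double *d_A, *d_b, *d_work;
    int *d_ipiv, *d_info, lwork = 0, info = 0;

    cudaMalloc(&d_A, sizeof(A));
    cudaMalloc(&d_b, sizeof(b));
    cudaMalloc(&d_ipiv, n * sizeof(int));
    cudaMalloc(&d_info, sizeof(int));
    cudaMemcpy(d_A, A, sizeof(A), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, sizeof(b), cudaMemcpyHostToDevice);

    cusolverDnHandle_t handle;
    cusolverDnCreate(&handle);

    // Workspace query, then in-place LU factorization with partial pivoting: A = P*L*U
    cusolverDnDgetrf_bufferSize(handle, n, n, d_A, n, &lwork);
    cudaMalloc(&d_work, lwork * sizeof(double));
    cusolverDnDgetrf(handle, n, n, d_A, n, d_work, d_ipiv, d_info);

    // Solve A*x = b using the LU factors; x overwrites b on the device
    cusolverDnDgetrs(handle, CUBLAS_OP_N, n, nrhs, d_A, n, d_ipiv, d_b, n, d_info);

    cudaMemcpy(&info, d_info, sizeof(int), cudaMemcpyDeviceToHost);
    cudaMemcpy(b, d_b, sizeof(b), cudaMemcpyDeviceToHost);
    printf("info = %d, x = [%f, %f, %f]\n", info, b[0], b[1], b[2]);

    cusolverDnDestroy(handle);
    cudaFree(d_A); cudaFree(d_b); cudaFree(d_work);
    cudaFree(d_ipiv); cudaFree(d_info);
    return 0;
}
```

If you truly need the inverse itself, you could reuse the same factorization and call getrs with the identity matrix as the right-hand sides (nrhs = n), but for most problems solving the system directly is the cheaper and better-conditioned route.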