Symmetric Gauss-Seidel iterative solver implementation using CUDA

Hi,

I’d like to implement a symmetric Gauss-Seidel iterative solver for systems of linear equations on the GPU, but I don’t know how. Please point me in the right direction to find the most suitable parallel algorithm for this, or share code snippets if somebody has already implemented it.

Any help will be greatly appreciated.

Have you performed a literature search with Google Scholar (http://scholar.google.com)? A very quick search returned a pointer to this paper, for example:

http://193.49.213.140/docs/sofa-pgs-hpcc09.pdf
Hadrien Courtecuisse and Jeremie Allard
Parallel Dense Gauss-Seidel Algorithm on Many-Core Processors

I have never used Gauss-Seidel so cannot tell whether this paper applies to your use case.

A few years ago I found an implementation of Gauss-Seidel which was being used for matrix inversion:

This paper mentions it:

[url][/url]

And I believe the same author at one point had posted code which did indeed directly invert a symmetric positive-definite matrix using Gauss-Seidel. If I really needed to, I could search my old projects to find that source.

Not sure if that applies to what you want, but it is a start.

Thank you guys for replies!

Actually, after a little investigation I’ve understood that for fine-grained parallelism in a Gauss-Seidel solver I have to use the red/black algorithm (also called red/black numbering or ordering). Do you have any experience with it?

Say the elemental CUDA kernel has the following input parameters:
vals - one-dimensional array (row-major ordering) representing the matrix A (Ax = rhs),
rhs - array holding the right-hand side,
x_prev - solution vector from the previous iteration (initial guess on the first iteration), and
x - the new solution vector.

How can I implement this kernel using red/black numbering?