Hello,
Has anyone looked at the paper (attached to this post) on computing the SVD on CUDA?
I am looking at Algorithm 2, which computes the bidiagonalization of the input matrix.
Line 5 says: eliminate A(t:m, t) and update Q(1:m, t).
I am guessing we need to update A by applying
I - sigma * u(i) * u(i)',
which is the classic Householder transformation expression. I am not sure about this, though, since the section above the algorithm pseudocode does not mention any storage for the identity matrix.
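To make my guess concrete, here is a small C sketch of how I would apply that reflector to the trailing block A(t:m, t:n) without ever forming an identity matrix. The function name, row-major storage, and 0-based indexing are my own choices, not the paper's:

```c
/* My guess (not the paper's code): apply H = I - sigma*u*u' from the left
 * to A(t:m, t:n). A is m-by-n, row-major, 0-based; u is the Householder
 * vector, assumed nonzero only in rows t..m-1. No identity matrix needed:
 * H*A = A - sigma * u * (u' * A). */
#include <stddef.h>

void apply_householder_left(double *A, size_t m, size_t n, size_t t,
                            const double *u, double sigma)
{
    for (size_t j = t; j < n; ++j) {
        double w = 0.0;                      /* w = u' * A(:, j)       */
        for (size_t i = t; i < m; ++i)
            w += u[i] * A[i * n + j];
        for (size_t i = t; i < m; ++i)       /* A(:, j) -= sigma*w*u   */
            A[i * n + j] -= sigma * w * u[i];
    }
}
```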
Now, I am a bit confused about how the left transformation matrix Q is updated. I see the expressions for updating A and Q, but I do not see how that expression can be used to update just one column of the Q matrix (Q(1:m, t)).
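My best guess so far, again written in my own notation (0-based indices, row-major Q), is that Q is accumulated as Q <- Q*H = Q - sigma*(Q*u)*u', and since u is zero above row t, only columns t..m of Q actually change at this step. Maybe that is what "update Q(1:m, t)" refers to? Something like:

```c
/* My guess, same assumptions as above: accumulate the left transform by
 * Q <- Q * H = Q - sigma * (Q*u) * u'. Q is m-by-m, row-major, 0-based.
 * Because u[0..t-1] == 0, only columns t..m-1 of Q are touched, and later
 * reflectors (with larger t) never touch column t again. */
#include <stddef.h>

void update_q(double *Q, size_t m, size_t t, const double *u, double sigma)
{
    for (size_t i = 0; i < m; ++i) {
        double v = 0.0;                      /* v = Q(i, :) * u        */
        for (size_t k = t; k < m; ++k)
            v += Q[i * m + k] * u[k];
        for (size_t k = t; k < m; ++k)       /* Q(i, :) -= sigma*v*u'  */
            Q[i * m + k] -= sigma * v * u[k];
    }
}
```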
Has anyone looked at this paper yet? Maybe they can help me understand what has to be done here.
Thanks,
xarg
SVD_on_CUDA.pdf (180 KB)