# cuSparse element-wise mask Dense to Sparse

Hello,

I’d like to take a dense matrix A and add it to a sparse matrix C, but apply the sparsity pattern of C element-wise to A. This is pretty much the `cusparseSDDMM` operation, which is:

`alpha * (op(A) * op(B)) ∘ spy(C) + beta * C`, where ∘ is the Hadamard (element-wise) product restricted to the sparsity pattern of C,

but I’d like to set B equal to the identity matrix. Is there a way to do this without creating a dummy dense matrix B that is just 1s on the diagonal?

In other words, I’d like to compute:

`alpha * op(A) ∘ spy(C) + beta * C`
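For clarity, here is a small host-side SciPy sketch of that masked operation (illustrative names and shapes, not the cuSPARSE C API): the result inherits the sparsity pattern of C, and only the entries stored in C are ever read from A.

```python
import numpy as np
from scipy import sparse

# Hypothetical small example: dense A, sparse C in CSR format
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
C = sparse.random(4, 4, density=0.3, format="csr", random_state=0)
alpha, beta = 0.5, 1.0

# Row/column index of every stored entry, in the same order as C.data
rows = np.repeat(np.arange(C.shape[0]), np.diff(C.indptr))
cols = C.indices

# alpha * A masked by spy(C), plus beta * C: the output's pattern is C's
out = C.copy()
out.data = alpha * A[rows, cols] + beta * C.data
```

The key point is that only `nnz(C)` entries of A are gathered, which is exactly what the masked operation saves over a full dense add.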

No, it is not possible with the current API. Using `cusparseSDDMM()` is the best option right now. Is there any specific application that you want to support?

Hello, I apologize, I didn’t realize I got a reply.

I was writing my own reply and actually realized we do have a dense matrix multiplication before the sparsity step.

Sorry, the code made it hard to see.

Thanks

Actually, I can’t see how this would work and would appreciate some help. I’m planning on rewriting this with cuSPARSE using the C API, but in Python I currently have:

```python
grad = kernel_mat @ self._weights - batch_y
self._weights[batch_offset : batch_offset + batch_size] += -lr * grad
```

- `kernel_mat` is a large dense matrix
- `self._weights` is a sparse matrix (multiple classes)
- `batch_y` is a sparse matrix
- `grad` is a dense matrix with the same dimensions as `batch_y`

I need to add the matrix `grad` into the sparse weights at those batch offsets, with the learning-rate multiplier applied. This would be easy if I could apply the sparsity pattern of C (`self._weights`), but are there any other tricks I can use in the cuSPARSE API?

Unfortunately, this is not possible. The only API that supports applying the sparsity pattern of the output matrix is SDDMM. Any other computation requires decomposing the problem in order to exploit the cuSPARSE API.
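As a rough sketch of what such a decomposition could look like, here is a host-side SciPy version of the masked gradient update from the post above (names, shapes, and the helper `masked_sgd_update` are made up for illustration; on the GPU each step would map to separate gather/scatter and SpMM-style kernels): compute the dense gradient first, then scatter it only into the entries already stored in the CSR weights.

```python
import numpy as np
from scipy import sparse

def masked_sgd_update(weights, kernel_mat, batch_y, batch_offset, lr):
    """Apply grad = kernel_mat @ weights - batch_y to CSR `weights`,
    updating only entries already stored in the batch rows (the mask)."""
    # sparse @ dense yields a dense ndarray; this mirrors kernel_mat @ weights
    grad = (weights.T @ kernel_mat.T).T - batch_y.toarray()
    batch_size = kernel_mat.shape[0]
    for i in range(batch_size):
        r = batch_offset + i
        start, stop = weights.indptr[r], weights.indptr[r + 1]
        cols = weights.indices[start:stop]       # stored columns in row r
        # scatter only into stored positions: spy(weights) acts as the mask
        weights.data[start:stop] += -lr * grad[i, cols]

# Small demo with made-up shapes
rng = np.random.default_rng(0)
W = sparse.random(6, 4, density=0.4, format="csr", random_state=0)
K = rng.standard_normal((3, 6))
Y = sparse.random(3, 4, density=0.3, format="csr", random_state=1)
masked_sgd_update(W, K, Y, batch_offset=2, lr=0.1)
```

Because the update touches only `weights.data` for the batch rows, the sparsity structure (`indptr`/`indices`) never changes, which is what makes the in-place CSR update safe.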

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.