I’d like to take a dense matrix A and add it to a sparse matrix C, but apply the sparsity pattern of C element-wise to A. This is pretty much the cusparseSDDMM operation, which is:

C = α (op(A) · op(B)) ∘ spy(C) + β C, where ∘ is the Hadamard (element-wise) product applied on the sparsity pattern of C

but I’d like to set B equal to the identity matrix. Is there a way to do this without creating a dummy dense B matrix that is just 1s on the diagonal?
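To make the intent concrete: with B = I, the op(A) · op(B) term reduces to A, so the whole operation collapses to masking A by C’s pattern plus a scaled C. A minimal SciPy sketch of that equivalence (small random shapes and the α, β values here are just for illustration, not from the original post):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = np.eye(4)  # the dummy identity matrix the question wants to avoid allocating
C = sparse.random(4, 4, density=0.3, random_state=0, format="csr")

alpha, beta = 0.5, 1.0

# spy(C): same sparsity pattern as C, with every stored value set to 1.
mask = C.copy()
mask.data[:] = 1.0

# SDDMM semantics: alpha * (A @ B) masked by spy(C), plus beta * C.
sddmm = alpha * mask.multiply(A @ B) + beta * C

# With B = I, A @ B == A, so the result is just A masked by C's pattern:
masked = alpha * mask.multiply(A) + beta * C

assert np.allclose(sddmm.toarray(), masked.toarray())
```

So the question amounts to: can cuSPARSE compute the `masked` form directly, without materializing the identity B?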

No, it is not possible with the current API. Using cusparseSDDMM() is the best option right now. Is there a specific application that you want to support?

Actually, I can’t see how this would work and would appreciate some help. I’m planning on rewriting this with cuSPARSE using the C API, but in Python I currently have:

kernel_mat is a large dense matrix
self._weights is a sparse matrix (multiple classes)
batch_y is a sparse matrix
grad is a dense matrix with the same dimensions as batch_y

I need to add the matrix grad into the sparse weights at those batch offsets, with the learning-rate multiplier applied. This would be easy if I could apply the sparsity pattern of C (self._weights), but are there any other tricks I can use in the cuSPARSE API?
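For reference, the masked update described above can be written directly against the stored entries of the sparse matrix, since only positions already present in the pattern ever change. A SciPy sketch under assumed names (`weights`, `grad`, `lr` stand in for `self._weights`, the dense gradient, and the learning-rate multiplier; the batch-offset indexing from the original code is omitted):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(1)
weights = sparse.random(6, 4, density=0.4, random_state=1, format="csr")
grad = rng.standard_normal((6, 4))  # dense gradient, same shape as the weight block
lr = 0.1                            # learning-rate multiplier (assumed name)

weights_before = weights.copy()     # kept only to verify the update below

# Gather grad at the stored (row, col) positions of weights and do a scaled add.
# csr.tocoo() preserves the storage order of .data, so the gather aligns with it,
# and the sparsity pattern of weights is never widened.
coo = weights.tocoo()
weights.data += lr * grad[coo.row, coo.col]
```

The key point is that the update is a gather plus an axpy over the existing `.data` array, which is exactly the piece that is awkward to express with the stock cuSPARSE level-3 routines.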

Unfortunately, this is not possible. The only API that supports applying the sparsity pattern of the output matrix is SDDMM. Any other computation requires decomposing the problem in order to exploit the cuSPARSE API.
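One such decomposition: because the result keeps C’s pattern, α·(A ∘ spy(C)) + β·C only needs C’s CSR structure, so it can be done with a per-entry gather and scaled add. A Python sketch of that loop (the function name `masked_axpy` is made up here; the inner loop maps one-to-one onto a trivial custom CUDA kernel over the CSR arrays):

```python
import numpy as np
from scipy import sparse

def masked_axpy(alpha, A, beta, C):
    """Return a CSR matrix with C's pattern holding alpha*A[i,j] + beta*C[i,j]."""
    out = C.copy().tocsr()
    for i in range(out.shape[0]):                      # each row of C
        for k in range(out.indptr[i], out.indptr[i + 1]):  # stored entries in row i
            j = out.indices[k]                         # column of this entry
            out.data[k] = alpha * A[i, j] + beta * out.data[k]
    return out

A = np.arange(16.0).reshape(4, 4)
C = sparse.random(4, 4, density=0.5, random_state=2, format="csr")
R = masked_axpy(0.5, A, 2.0, C)
```

This sidesteps SDDMM (and the dummy identity B) entirely, at the cost of leaving the cuSPARSE API for one small custom kernel.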