In general I have found that cuBLAS is fast enough that I just always use the dense format for matrices. At this point I am trying to determine the threshold (percentage of zero entries, in cases where I can know this in advance) that would determine which format (and which library, cuBLAS or cuSPARSE) to use.

There have been other posts claiming that equivalent operations on moderately sized matrices (with, let's say, 50% zero entries) are still faster using cuBLAS than cuSPARSE, even though the total number of operations for the dense format is higher.

I will be receiving matrix data in CSC format, and am tempted to use the cuSPARSE csc2dense() conversion and then use the cuBLAS functions, rather than muck around with all the additional pointer arrays the CSC format requires.
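For reference, here is a CPU-side analogue of that conversion using scipy, just to illustrate what the three CSC arrays (values, row indices, column pointers) hold and what the dense expansion produces. The array names are scipy's, not cuSPARSE's; cusparseScsc2dense performs the same expansion on the device. One useful side effect of the CSC layout: the exact nonzero count is already in the last column-pointer entry, so no estimation is needed.

```python
import numpy as np
from scipy.sparse import csc_matrix

# CSC arrays for a small 3x4 example matrix with 4 nonzeros
val = np.array([10.0, 20.0, 30.0, 40.0], dtype=np.float32)
row_ind = np.array([0, 2, 1, 2], dtype=np.int32)
col_ptr = np.array([0, 1, 2, 2, 4], dtype=np.int32)  # length = ncols + 1

m, n = 3, 4
A = csc_matrix((val, row_ind, col_ptr), shape=(m, n))
dense = A.toarray()  # analogue of csc2dense

# The nonzero count comes straight from the column-pointer array,
# so the zero percentage is known exactly, not estimated:
nnz = int(col_ptr[-1] - col_ptr[0])
zero_fraction = 1.0 - nnz / (m * n)
```

So in my case the "estimate" of the zero percentage is actually exact as soon as the CSC data arrives.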

Obviously there are limits to the dense format, but I have yet to see an input set which I could not handle in dense format.

I am sure such data exists, but most scientists want to believe that they should always use the sparse format. On the CPU the conversion to sparse format saves time, but on the GPU this seems not to be as true.

So if I am able to determine the percentage of zero entries, and am operating on matrices with dimensions of (30,000 x 10,000), what threshold of zero entries will make dealing with the CSC sparse format necessary?
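As a back-of-envelope starting point, storage alone gives a rough break-even near 50% zeros for these dimensions (assuming float32 values and int32 indices, and ignoring alignment and library workspace), which is consistent with the claim above that even at 50% zeros dense can still win on speed; the performance break-even is generally at a much higher zero fraction than the storage break-even:

```python
# Rough storage comparison for a 30,000 x 10,000 matrix.
# Assumptions: float32 values (4 bytes), int32 indices (4 bytes);
# real break-even for *speed* is a separate, higher threshold.
m, n = 30_000, 10_000

dense_bytes = m * n * 4  # one float per entry, zeros included

def csc_bytes(nonzero_fraction):
    """Bytes for CSC storage at a given nonzero fraction."""
    nnz = int(nonzero_fraction * m * n)
    # values + row indices + column pointers
    return nnz * 4 + nnz * 4 + (n + 1) * 4

# Each nonzero costs 8 bytes in CSC vs 4 bytes dense, so CSC only
# saves memory once fewer than ~50% of entries are nonzero.
```

So even before timing anything, CSC does not save memory until more than about half the entries are zero, and the posts referenced above suggest the speed crossover sits well beyond that.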