Load sparse matrix in CUDA

I want to use cuSOLVER's cusolverSpDcsrlsvlu() to solve the sparse linear system Ax = b, where A is sparse. How can I load my .mtx-format sparse matrix in CUDA?

In addition, I notice the definition of cusolverSpDcsrlsvlu below has a [Host] keyword. What does this mean? Does [Host] mean the function runs on the CPU?

cusolverStatus_t cusolverSpDcsrlsvlu[Host](
    cusolverSpHandle_t handle,
    int n,
    int nnzA,
    const cusparseMatDescr_t descrA,
    const double *csrValA,
    const int *csrRowPtrA,
    const int *csrColIndA,
    const double *b,
    double tol,
    int reorder,
    double *x,
    int *singularity);

I think I found the answer to my first question: the Matrix Market I/O library at http://math.nist.gov/MatrixMarket/mmio-c.html

The answer to your second question is contained in the documentation:

https://docs.nvidia.com/cuda/cusolver/index.html#naming-convention

"where cusolverSp is the GPU path and cusolverSpHost is the corresponding CPU path."

So, yes, the host version runs on the CPU.

The path determines the location (host or device) of certain function parameters, which is why the parameter descriptions in the documentation include a table with separate columns for the host (Host MemSpace) and device (cusolverSp MemSpace) versions.

I would like to know whether the host version has any part running on the GPU? I am still confused about it. If only a host version is provided, should I look only at the Host MemSpace column in the documentation's input table, and ignore the cusolverSp MemSpace column?

That is correct: the host version runs entirely on the CPU, not the GPU. If you are using the host version, use the Host MemSpace column to determine where each parameter should reside, not the cusolverSp MemSpace column.
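To make that concrete, here is a hedged sketch of calling the Host path on a tiny system. Every array is an ordinary host array (no cudaMalloc/cudaMemcpy), per the Host MemSpace column; the specific test matrix, tolerance, and reorder choice are my own illustrative values. Error checking is kept minimal for brevity; real code should check every status.

```cpp
#include <cstdio>
#include <cusolverSp.h>
#include <cusparse.h>

int main() {
    // 3x3 diagonal test system: A = diag(4, 5, 6), b = (8, 10, 18),
    // so the solution should be x = (2, 2, 3).
    const int n = 3, nnz = 3;
    double csrValA[]   = {4.0, 5.0, 6.0};  // plain host arrays: the Host
    int    csrRowPtrA[] = {0, 1, 2, 3};    // path takes host pointers for
    int    csrColIndA[] = {0, 1, 2};       // every MemSpace-Host parameter
    double b[] = {8.0, 10.0, 18.0};
    double x[3];
    int singularity = -1;

    cusolverSpHandle_t handle;
    cusolverSpCreate(&handle);

    cusparseMatDescr_t descrA;
    cusparseCreateMatDescr(&descrA);
    cusparseSetMatType(descrA, CUSPARSE_MATRIX_TYPE_GENERAL);
    cusparseSetMatIndexBase(descrA, CUSPARSE_INDEX_BASE_ZERO);

    // Host path: the LU factorization and solve run on the CPU.
    cusolverStatus_t status = cusolverSpDcsrlsvluHost(
        handle, n, nnz, descrA,
        csrValA, csrRowPtrA, csrColIndA,
        b,
        1e-12,   // tol: threshold for declaring the matrix singular
        0,       // reorder: 0 = no fill-reducing reordering
        x, &singularity);

    if (status != CUSOLVER_STATUS_SUCCESS || singularity >= 0)
        printf("solve failed (singularity = %d)\n", singularity);
    else
        printf("x = %f %f %f\n", x[0], x[1], x[2]);

    cusparseDestroyMatDescr(descrA);
    cusolverSpDestroy(handle);
    return 0;
}
```

Compile with something like `nvcc solve.cpp -lcusolver -lcusparse`. Note the handle still comes from cusolverSpCreate even though the work itself happens on the CPU.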

Thank you!