Can cuSPARSE ilu0 support 64-bit integers?

Nowadays, we can allocate memory with cudaMallocManaged to overcome the GPU's hardware memory limit. As such, I see no apparent reason why the CUDA libraries do not all use 64-bit integers.

I would like to solve a sparse matrix with slightly more than 2^31-1 non-zeros, in which case a 32-bit integer variable representing the number of non-zeros overflows to a negative number.

Please tell me how to use 64-bit integers with cuSPARSE ilu0.

Currently, all cuSPARSE Generic APIs support 64-bit interfaces. Our goal is to support 64-bit indexing throughout all our Math libraries.
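For reference, a 64-bit-index CSR matrix descriptor for the Generic API can be created with `cusparseCreateCsr` by passing `CUSPARSE_INDEX_64I` for the index types. This is only a sketch: it assumes the device arrays already hold valid CSR data with `int64_t` indices and double-precision values, and it omits error checking and the surrounding handle/stream setup.

```c
#include <cusparse.h>
#include <stdint.h>

/* Sketch: d_rowOffsets, d_colInd, d_vals are assumed to be device pointers
 * to CSR data with 64-bit row offsets and column indices. */
cusparseSpMatDescr_t makeCsr64(int64_t rows, int64_t cols, int64_t nnz,
                               int64_t *d_rowOffsets, int64_t *d_colInd,
                               double *d_vals)
{
    cusparseSpMatDescr_t matA;
    cusparseCreateCsr(&matA, rows, cols, nnz,
                      d_rowOffsets, d_colInd, d_vals,
                      CUSPARSE_INDEX_64I,        /* 64-bit row offsets   */
                      CUSPARSE_INDEX_64I,        /* 64-bit column indices */
                      CUSPARSE_INDEX_BASE_ZERO,
                      CUDA_R_64F);               /* double-precision values */
    return matA;
}
```

Note that this only covers matrices consumed by the Generic APIs (SpMV, SpMM, SpSV, etc.); routines outside the Generic API may still take 32-bit indices.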

If you need a particular feature that is not currently supported, please submit a request along with information on your use case (e.g., size, target architecture, data type, etc.) to Math-Libs-Feedback@exchange.nvidia.com.