Nowadays we can allocate all memory with cudaMallocManaged to overcome the GPU's hardware memory limit. Given that, I see no apparent reason why the CUDA libraries do not use 64-bit integers throughout.
I would like to solve a sparse matrix with slightly more than 2^31-1 nonzeros, so the 32-bit integer variable representing the number of nonzeros overflows to a negative number.
Please tell me how to use 64-bit integers with the cuSPARSE ILU0 routine.