I’ve seen that calling cuBLAS from kernels (via dynamic parallelism) has been deprecated since CUDA 10.0. Is it still possible, and is there an ecosystem of linear-algebra libraries callable from device code?

For example, I have a problem where I am doing hundreds of thousands of independent estimations via an EM algorithm, each of which involves one LU decomposition of a matrix and then solving the resulting linear system some number of times. I have written my own device functions for these, but it seems odd that I wouldn’t have ready access to BLAS- or LAPACK-style routines. I have tried searching, but it is difficult to distinguish between, say, GPU-accelerated linear algebra libraries called from the host and libraries meant to be called from device code.
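For concreteness, here is a minimal sketch of the host-side batched alternative I’m considering as a fallback (using cuBLAS `cublasDgetrfBatched` / `cublasDgetrsBatched`); the matrix order `N`, batch count `BATCH`, and data setup are placeholders, and error checking is omitted:

```cpp
// Sketch: batched LU factorization + solve with host-side cuBLAS,
// one small N x N system per batch entry. N and BATCH are illustrative.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

#define N      8        // order of each small matrix (placeholder)
#define BATCH  100000   // number of independent systems (placeholder)

int main() {
    cublasHandle_t handle;
    cublasCreate(&handle);

    // One contiguous allocation for all matrices and all right-hand sides.
    double *dA, *dB;
    cudaMalloc(&dA, sizeof(double) * N * N * BATCH);
    cudaMalloc(&dB, sizeof(double) * N * BATCH);
    // ... fill dA and dB with the per-system data (cudaMemcpy or a kernel) ...

    // The batched routines take a device array of pointers, one per matrix.
    std::vector<double*> hAptrs(BATCH), hBptrs(BATCH);
    for (int i = 0; i < BATCH; ++i) {
        hAptrs[i] = dA + (size_t)i * N * N;
        hBptrs[i] = dB + (size_t)i * N;
    }
    double **dAptrs, **dBptrs;
    cudaMalloc(&dAptrs, sizeof(double*) * BATCH);
    cudaMalloc(&dBptrs, sizeof(double*) * BATCH);
    cudaMemcpy(dAptrs, hAptrs.data(), sizeof(double*) * BATCH, cudaMemcpyHostToDevice);
    cudaMemcpy(dBptrs, hBptrs.data(), sizeof(double*) * BATCH, cudaMemcpyHostToDevice);

    int *dPivots, *dInfo;
    cudaMalloc(&dPivots, sizeof(int) * N * BATCH);
    cudaMalloc(&dInfo,   sizeof(int) * BATCH);

    // LU-factorize all BATCH matrices in one call (factors overwrite dA).
    cublasDgetrfBatched(handle, N, dAptrs, N, dPivots, dInfo, BATCH);

    // Solve A x = b for every system, reusing the factorizations;
    // solutions overwrite dB. getrsBatched reports errors via a host int.
    int hInfo = 0;
    cublasDgetrsBatched(handle, CUBLAS_OP_N, N, 1,
                        dAptrs, N, dPivots, dBptrs, N, &hInfo, BATCH);

    cudaDeviceSynchronize();
    printf("getrs info = %d\n", hInfo);

    cudaFree(dPivots); cudaFree(dInfo);
    cudaFree(dAptrs);  cudaFree(dBptrs);
    cudaFree(dA);      cudaFree(dB);
    cublasDestroy(handle);
    return 0;
}
```

That covers the LU and solve steps from the host, but it still doesn’t answer the question of whether there is a supported way to call this kind of routine from inside a kernel.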