CUDA is a parallel programming environment based on a subset of C++11. As such it is suited for computations of any kind that can benefit from massive parallelism. The typical parallelism in a CUDA-accelerated app is on the order of ten thousand threads, versus a dozen or so threads on a CPU.
My background is skewed heavily toward scientific computation, not financial computation. I don’t know what solvers are typically used in rate computation. You may be able to find open-source implementations that you can study; I imagine a package like R might offer rate computations. On the face of it, the function whose root one has to find to compute the rate looks reasonably well behaved, so to first order any commonly-used solver would probably work. You could try simple bisection first, and then progress to a hybrid method like Brent-Dekker to see whether that works better.
I am not sure where massive parallelism would come in. Maybe in your use case you need to consider many thousands of scenarios, and compute the interest rate for each?
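If that is the case, the mapping to CUDA is natural: one thread per scenario, each thread bisecting for the root of its own cash-flow stream. A rough, untested sketch — the names, the memory layout, and the assumption that every root is bracketed in [0, 1] are all mine:

```cpp
// Net present value of one scenario's cash flows at rate r.
__device__ double npv(const double* cf, int n, double r)
{
    double sum = 0.0, disc = 1.0;
    for (int t = 0; t < n; ++t) { sum += cf[t] / disc; disc *= 1.0 + r; }
    return sum;
}

// One thread per scenario; cashflows is num_scenarios rows of num_periods each.
__global__ void solve_rates(const double* cashflows, int num_periods,
                            int num_scenarios, double* rates)
{
    int s = blockIdx.x * blockDim.x + threadIdx.x;
    if (s >= num_scenarios) return;
    const double* cf = cashflows + (size_t)s * num_periods;

    double lo = 0.0, hi = 1.0;          // assumed bracket of the root
    double flo = npv(cf, num_periods, lo);
    for (int i = 0; i < 60; ++i) {      // plain bisection, as on the host
        double mid  = 0.5 * (lo + hi);
        double fmid = npv(cf, num_periods, mid);
        if ((flo < 0.0) == (fmid < 0.0)) { lo = mid; flo = fmid; }
        else                             { hi = mid; }
    }
    rates[s] = 0.5 * (lo + hi);
}

// launch example:
// solve_rates<<<(num_scenarios + 255) / 256, 256>>>(d_cf, T, N, d_rates);
```

With tens of thousands of scenarios, this is exactly the kind of workload where the GPU's parallelism pays off, since every thread runs the same code on its own data.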
To get started with CUDA, any modern moderately-priced consumer GPU will do, e.g. a GTX 1060. I run my equipment 24/7 and do use “professional” systems built with Xeons and Quadros, but a casual CUDA user does not need to go that (expensive) route.