If you want to take advantage of hardware texture interpolation, nothing has changed, other than that texture references are now deprecated and will be removed in the near future, so if your code is based on texture references you would want to change it to use texture objects. Since texture objects were introduced in 2013, more than six years ago, your code may well have used texture objects from the start.
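For reference, switching from a texture reference typically amounts to something like the following sketch (untested; `d_array`, `my_kernel`, and the launch configuration are placeholder names, with `d_array` standing for whatever `cudaArray` the old texture reference was bound to):

```cuda
cudaTextureObject_t tex = 0;

// Resource descriptor: point at the same cudaArray the reference used
cudaResourceDesc resDesc = {};
resDesc.resType = cudaResourceTypeArray;
resDesc.res.array.array = d_array;

// Texture descriptor: same settings previously declared on the reference
cudaTextureDesc texDesc = {};
texDesc.addressMode[0]   = cudaAddressModeClamp;
texDesc.addressMode[1]   = cudaAddressModeClamp;
texDesc.filterMode       = cudaFilterModeLinear;   // hardware interpolation
texDesc.readMode         = cudaReadModeElementType;
texDesc.normalizedCoords = 1;

cudaCreateTextureObject (&tex, &resDesc, &texDesc, NULL);

// Unlike a texture reference, the object is passed as an ordinary argument:
my_kernel<<<grid, block>>>(tex /*, ... */);
// ... inside the kernel: float v = tex2D<float>(tex, x, y);

cudaDestroyTextureObject (tex);
```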
The hardware texture interpolation performs its computation with 1.8 fixed-point arithmetic (see the relevant appendix in the CUDA Programming Guide), which is a coarse granularity that leads to limited-quality results. I am aware that there is a trend in CT, for example, toward higher-quality images requiring more fine-grained interpolation, which would motivate the use of single-precision FMAs to perform the interpolation in software. I do not have sufficient domain knowledge to gauge how pronounced this trend is.
While the use of textures was often necessary on early GPUs to maximize memory throughput, this is much less true of modern GPUs, whose greatly improved memory subsystems allow for effective caching of read-only data even without the use of textures. As one bit of anecdotal evidence, I have not used textures in half a dozen years or so.
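On those GPUs (sm_35 and later), marking pointer arguments as read-only is usually all that is needed for loads to be routed through the read-only data cache, no texture setup required. A sketch, with an illustrative kernel:

```cuda
// 'const ... __restrict__' tells the compiler the data is read-only and
// not aliased, making the load of src[i] eligible for the LDG path
// through the read-only (texture) cache. Kernel name and arguments are
// illustrative, not from the original post.
__global__ void scale (const float * __restrict__ src,
                       float * __restrict__ dst,
                       float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        dst[i] = factor * src[i];
    }
}
```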
What you might want to do is create a little experiment where you use your legacy code to establish baseline performance and then code a new version that uses software interpolation without textures. Study the Best Practices Guide on how to maximize memory throughput, and also make sure to use FMAs as aggressively as possible. I would also recommend using the latest CUDA toolchain, as I still see evidence of continuous incremental improvements in the compiler's code generation. Obviously you would want to use modern hardware, say Pascal or a later architecture (Volta, Turing, Ampere).
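Such an experiment can be timed with CUDA events along these lines (a sketch; `legacy_kernel`, `new_kernel`, and their launch configurations are placeholders for your two code versions):

```cuda
cudaEvent_t start, stop;
float ms;
cudaEventCreate (&start);
cudaEventCreate (&stop);

// Baseline: legacy version using hardware texture interpolation
cudaEventRecord (start);
legacy_kernel<<<grid, block>>>(/* ... */);
cudaEventRecord (stop);
cudaEventSynchronize (stop);
cudaEventElapsedTime (&ms, start, stop);
printf ("baseline:    %.3f ms\n", ms);

// Candidate: software interpolation, no textures
cudaEventRecord (start);
new_kernel<<<grid, block>>>(/* ... */);
cudaEventRecord (stop);
cudaEventSynchronize (stop);
cudaEventElapsedTime (&ms, start, stop);
printf ("new version: %.3f ms\n", ms);

cudaEventDestroy (start);
cudaEventDestroy (stop);
```

In practice you would want to run each variant several times and use the fastest (or median) timing to reduce noise.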