Requesting a recommendation: V100 vs. T4 vs. RTX 2080 Ti vs. Titan RTX for CUDA programming

Hi,
I need to buy NVIDIA GPUs for my research project.
My project is not related to AI or graphics; it is general-purpose supercomputing distributed over a network (grid computing) using CUDA.
I think the candidates suitable for my project are:
one V100
multiple T4s
multiple Titan RTXs
multiple RTX 2080 Tis

From what I found searching, it seems that multiple RTX 2080 Tis or Titan RTXs would be the optimal solution for my project (for fast computation).
The Tesla series offers good power efficiency, but that does not matter to me; higher computation speed is more important.

You did not state any meaningful requirements beyond "super computing; computation speed is more important".

Based on this, I recommend the V100, because it can do scientific (i.e., IEEE 754 double-precision) arithmetic at high speed, whereas the T4 and RTX 2080 Ti cannot.

See, the difference in double-precision (FP64) throughput is more than an order of magnitude:

Tesla V100: 7 – 7.8 TFLOPS (FP64)
GeForce RTX 2080 Ti: ~0.44 TFLOPS (FP64, estimated)
Tesla T4: ~0.25 TFLOPS (FP64, estimated)

It would take more than a dozen of the lesser cards to match one V100 in double-precision throughput (7 TFLOPS / 0.44 TFLOPS ≈ 16 RTX 2080 Tis per V100), which makes them the more expensive option.
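A quick back-of-the-envelope check of that ratio, using the estimated FP64 figures from the table above (these are spec-sheet estimates, not measurements on your workload):

```python
import math

# Approximate FP64 throughput in TFLOPS (figures from the comparison above).
fp64_tflops = {
    "Tesla V100": 7.0,            # low end of the 7-7.8 range
    "GeForce RTX 2080 Ti": 0.44,  # estimated
    "Tesla T4": 0.25,             # estimated
}

# Number of cards needed to match one V100 in double precision.
for name in ("GeForce RTX 2080 Ti", "Tesla T4"):
    cards = math.ceil(fp64_tflops["Tesla V100"] / fp64_tflops[name])
    print(f"{name}: ~{cards} cards per V100")
```

This prints roughly 16 RTX 2080 Tis or 28 T4s per V100, ignoring the extra interconnect and host overhead that a many-card setup would also add.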

source: https://www.microway.com/knowledge-center-articles/comparison-of-nvidia-geforce-gpus-and-nvidia-tesla-gpus/