I can’t give you a definitive answer. Teslas tend to be at the high end of AI training hardware, but there are old models and new models, and an old Tesla might not be as good as an RTX 3090. In general I would be wary of used Teslas unless you know the card wasn’t used for bitcoin mining. You would then have to find a review or benchmark and see how it compares to something like an RTX 3090.
Among all of the choices, if you are training a large enough model, there is no substitute for lots of video RAM. VRAM capacity generally increases with GPUs intended for AI training, but something as simple as an RTX 3080 may already have as much VRAM as your situation needs. Estimating the VRAM required depends heavily on your model size, batch size, and optimizer, so I can’t give you a single formula for every case. FYI, the “Titan” series branding is sort of a crossover point between a desktop GPU and an AI training GPU, and is useful in both roles.
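As a very rough starting point, here is a back-of-the-envelope sketch for the training-time VRAM a model's parameters alone will consume. The function name and the optimizer multiplier are my own assumptions (the multiplier of 4 approximates fp32 weights, gradients, and Adam's two moment buffers); activation memory is workload-dependent and deliberately excluded, so treat the result as a lower bound, not a definitive requirement.

```python
def estimate_training_vram_gb(num_params, bytes_per_param=4, state_multiplier=4):
    """Rough lower-bound VRAM estimate for training, in GiB.

    state_multiplier=4 approximates: fp32 weights (1x) + gradients (1x)
    + Adam first and second moments (2x). Activations, framework overhead,
    and CUDA context are NOT included, so real usage will be higher.
    """
    return num_params * bytes_per_param * state_multiplier / 1024**3

# Example: a 1-billion-parameter model in fp32 with Adam
print(round(estimate_training_vram_gb(1_000_000_000), 1))  # ~14.9 GiB before activations
```

Even this crude estimate makes the point in the paragraph above concrete: a 1B-parameter model already overflows a 10 GB RTX 3080 before a single activation is stored, while much smaller models fit comfortably.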
The Titan branding has since ended; in the RTX 3000 series, the RTX 3090 is essentially the continuation of the “Titan” line, just without the name.
The Tesla series has always been aimed at high-end compute and was never intended for use as a desktop card.
Added note: older hardware might not be supported by newer CUDA releases. So if you do pick an older card, check that current drivers and the CUDA toolkit still support it and expose the features you need.
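To make that concrete, here is a minimal sketch of the kind of check you would do. The lookup table reflects my understanding of NVIDIA's published support matrix (e.g. CUDA 11 dropped Kepler sm_30, and CUDA 12 dropped Kepler entirely), but these cutoffs change between releases, so verify against the current CUDA release notes before buying; the function name is my own.

```python
# Approximate minimum compute capability supported by each major CUDA toolkit.
# Assumption based on NVIDIA release notes -- always double-check current docs.
MIN_COMPUTE_CAPABILITY = {
    10: 3.0,  # CUDA 10.x still supported Kepler (sm_30)
    11: 3.5,  # CUDA 11.x dropped sm_30/sm_32
    12: 5.0,  # CUDA 12.x dropped Kepler entirely (Maxwell and up)
}

def card_supported(compute_capability, cuda_major):
    """Return True if a card of the given compute capability is
    supported by the given major CUDA toolkit version."""
    return compute_capability >= MIN_COMPUTE_CAPABILITY[cuda_major]

# Example: a used Tesla K80 is compute capability 3.7 --
# still usable with CUDA 11, but dropped by CUDA 12.
print(card_supported(3.7, 11), card_supported(3.7, 12))
```

This is exactly the trap with cheap used Teslas: a card like the K80 can look like a bargain per gigabyte of VRAM, yet be locked out of current CUDA toolkits and the framework builds that depend on them.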