Some questions about using 10x0 and 20x0 cards for DL

Hi, I have been a researcher in AI for 20+ years but I have not used TensorFlow. (AI frameworks were not available back when I started.) I am going to learn some DL stuff and do some work in this area. Maybe later I will buy a Titan RTX, but not in the next three months. Could you please let me know the following?

  1. Given a model, if I want to see how it behaves under different initial parameters, will there be a problem if my desktop has two GPUs of different kinds (e.g. one GTX 1060 and one RTX 2080/2080 Ti or Titan RTX)?

  2. Am I correct that only when I do parallel training of the same network with the same set of initial parameters will I need GPUs of the same model?

  3. Those 20x0 cards have Tensor Cores in addition to CUDA cores. Are Tensor Cores helpful in speeding up training in TensorFlow? What other reasons are there to buy an RTX card now rather than a GTX card?

I haven’t used a framework either. I have just built my own, or used the cuDNN library together with my own supplemental algorithms for the things that are not in cuDNN.

  1. NCCL is a library for message passing between devices. I haven’t used it, so you might want to check its documentation first; I’d guess you won’t be using it directly. That said, if I wrote my own convolution algorithm that splits work across 2 GPUs, then I would want the GPUs to be the same.
    On the other hand, if I had two completely independent networks, where neither network needs anything from the other, then you don’t need the GPUs to be the same (see the sketch for question 1 below).

  2. Not really sure what you are asking here. Are you talking about one network or two? Hopefully the first part answered that question. If you mean training one network in parallel across both GPUs, that is the case where matched GPUs matter, since the slower card holds back every step (see the sketch for question 2 below).

  3. Never used a GPU with Tensor Cores. I am poor. But the cuDNN library supports them for training; they use separate flags and such. I would assume it would speed up training as long as the framework supports it (see the sketch for question 3 below).
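
Sketch for question 1: here is roughly what two independent runs of the same architecture with different initial parameters could look like in TensorFlow, each pinned to its own GPU. The model, toy data, and seeds are placeholders I made up; the point is that the two runs never exchange data, so the cards don’t need to match.

```python
import tensorflow as tf

def build_model(seed):
    # Same architecture, different initial parameters via the seed.
    init = tf.keras.initializers.GlorotUniform(seed=seed)
    return tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", kernel_initializer=init),
        tf.keras.layers.Dense(10, kernel_initializer=init),
    ])

# Toy data just so the sketch runs; replace with your own dataset.
x = tf.random.normal([1024, 32])
y = tf.random.uniform([1024], maxval=10, dtype=tf.int32)

# One independent run per GPU: the GTX 1060 and the RTX card never
# communicate, so they don't need to be the same model of card.
for device, seed in [("/GPU:0", 0), ("/GPU:1", 1)]:
    with tf.device(device):
        model = build_model(seed)
        model.compile(
            optimizer="adam",
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
        model.fit(x, y, epochs=1, batch_size=64, verbose=0)
```

In practice it is often simpler to launch two separate processes and pin each one to a card with CUDA_VISIBLE_DEVICES=0 and CUDA_VISIBLE_DEVICES=1; the effect is the same.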
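
Sketch for question 2: the case where matched GPUs do matter is data-parallel training of one network across both cards. In TensorFlow that is usually tf.distribute.MirroredStrategy, which on Linux uses NCCL all-reduce under the hood; the model and data below are placeholders.

```python
import tensorflow as tf

# Replicate one model across both GPUs; gradients are averaged with an
# all-reduce (NCCL on Linux) every step, so a slower card drags down both.
strategy = tf.distribute.MirroredStrategy(devices=["/GPU:0", "/GPU:1"])

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

x = tf.random.normal([1024, 32])
y = tf.random.uniform([1024], maxval=10, dtype=tf.int32)
model.fit(x, y, epochs=1, batch_size=128, verbose=0)
```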
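
Sketch for question 3: the cuDNN-level flags mentioned in the reply are handled by the framework; at the TensorFlow/Keras level the usual way to get onto the Tensor Cores is mixed-precision training (float16 compute, float32 variables). A minimal sketch, assuming a reasonably recent TF 2.x (the exact API has moved between versions):

```python
import tensorflow as tf
from tensorflow.keras import mixed_precision

# Run compute in float16 so cuDNN/cuBLAS can dispatch to Tensor Cores on
# RTX cards; variables stay in float32 for numerical stability.
mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    # Keep the output layer in float32 so the loss is computed in full precision.
    tf.keras.layers.Dense(10, dtype="float32"),
])

# Loss scaling keeps small float16 gradients from underflowing to zero.
opt = mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam())
model.compile(
    optimizer=opt,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```

Note that mixed_float16 mainly pays off on cards that actually have Tensor Cores (the 20x0 series and later); on a 10x0 card you get little or no speedup from it.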