GPU resource needed for training 10000 models


I am a Big Data Science Masters student, and I am curious what kind of GPU resources I would require to train around 10,000 models simultaneously. For example, suppose there are 10,000 users whose behavior is to be monitored using a deep neural network. I have two baselines:

  1. Create a single model for all users
  2. Create one model for every individual user

I'd prefer 2 over 1 if there is a GPU that can handle multiple models at once. I understand the cost and trade-offs behind this.
Any help would be highly appreciated!
Thanks in advance!

What type of model is it? Algorithms / library used?

@jimscott It's an LSTM/GAN model, and the library used is Keras.
It's basically user behavior analytics.
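To get a feel for whether option 2 is even plausible, a back-of-envelope memory estimate helps. The sketch below is a rough sizing exercise, not an implementation: the layer sizes (`input_dim=16`, `hidden=32`), the 16 GB GPU, and the `optimizer_factor` are all hypothetical assumptions, chosen only to illustrate the arithmetic for a small per-user LSTM.

```python
# Back-of-envelope estimate: how many tiny per-user LSTM models could sit in
# GPU memory at once. All concrete sizes here are hypothetical assumptions.

def lstm_params(input_dim, hidden):
    # Standard single-layer LSTM parameter count: 4 gates, each with input
    # weights, recurrent weights, and a bias.
    return 4 * (hidden * (input_dim + hidden) + hidden)

def model_params(input_dim=16, hidden=32, output_dim=1):
    # One small LSTM layer followed by a dense output layer.
    dense = hidden * output_dim + output_dim
    return lstm_params(input_dim, hidden) + dense

def models_per_gpu(gpu_mem_gb=16, bytes_per_param=4, optimizer_factor=3):
    # optimizer_factor ~ weights + gradients + optimizer state (e.g. Adam).
    per_model_bytes = model_params() * bytes_per_param * optimizer_factor
    return int(gpu_mem_gb * 1024**3 // per_model_bytes)

print(model_params())    # parameters in one tiny per-user model
print(models_per_gpu())  # optimistic upper bound on resident models
```

Under these assumptions the weights alone are tiny (a few thousand parameters per model), so 10,000 models fit comfortably in memory. In practice the real limits are activations, per-batch overhead, and the fact that Keras launches kernels per model, so throughput (how many models you can *train* concurrently) matters more than raw memory, and the practical answer usually involves batching users together or sharing a model with per-user embeddings.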