I am running model training for a deep neural network using TensorFlow and Python on a 7 GB dataset with 1 million samples and 1000 features.
Suppose the CPU model, RAM size, HDD size, motherboard, etc. are identical on both machines.
One machine has a GeForce RTX 3090, and the other has a GeForce RTX 4060 Ti.
Which machine will be able to finish the task in a shorter amount of time?
How much shorter?
Explain why.
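
For context, here is roughly the kind of training script I have in mind; the network architecture, batch size, and epoch count below are illustrative assumptions for the sake of a concrete example, not my actual configuration.

```python
import numpy as np
import tensorflow as tf

# Placeholder data matching the workload described above:
# ~1 million samples with 1000 features each (the real dataset is ~7 GB on disk).
# Shrink num_samples for a quick local test if memory is tight.
num_samples, num_features = 1_000_000, 1000
x = np.random.rand(num_samples, num_features).astype(np.float32)
y = np.random.randint(0, 2, size=(num_samples,)).astype(np.float32)

# A simple fully connected network; the layer sizes are assumptions for illustration.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(num_features,)),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# tf.data pipeline with batching and prefetching so the GPU is not starved by input handling.
dataset = (
    tf.data.Dataset.from_tensor_slices((x, y))
    .shuffle(buffer_size=10_000)
    .batch(1024)
    .prefetch(tf.data.AUTOTUNE)
)

model.fit(dataset, epochs=10)
```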