Dear Markus,
this is a very unfortunate decision by NVIDIA. We are scientists, and we fully utilize NVLink in our dual RTX A6000 setup for machine learning.
So, basically, there is no way for us to move to the RTX 6000 Ada Generation and pool the combined 96 GB of memory from two cards to train GANs, transformers, or other large neural networks?
What would you recommend instead? With limited resources, how can we build a similar ML PC with enough video RAM on board, and, most importantly, with that memory pooled?
Thanks in advance