RTX 6000 Ada - no more NVLink even on Pro GPUs?

As someone whose roots are in the VFX industry, I think this is a massive mistake. I’ve been a big fan of NVIDIA and have built a lot of my experience on utilising CUDA in this industry.

I get the market segmentation for AI workloads: you don’t want a cheaper, high-powered GPU with NVLink taking business away from the H100s etc. But what about people in VFX, graphics, and graphics + ML?

We can’t pool memory with P2P. We have huge scenes with different graphical representations to render using the power of two RTX 6000s, and now we have to fit them into a single GPU’s 48GB of VRAM (see the sketch below). It’s just sad.
It’s not like there is any ‘more expensive’ option as a backup. If we could afford an H100, sure, it has more memory, but its architecture is designed for AI workloads, not graphics + AI/ML, which is what was so great about the A/RTX 6000 series cards.
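For anyone unfamiliar, the pooling I’m talking about is the standard CUDA peer-access path. A minimal sketch, assuming two visible devices with IDs 0 and 1, error handling omitted:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int can01 = 0, can10 = 0;
    // Ask the driver whether each GPU can map the other's memory.
    cudaDeviceCanAccessPeer(&can01, 0, 1);
    cudaDeviceCanAccessPeer(&can10, 1, 0);

    if (can01 && can10) {
        // Enable the mapping in both directions. After this, a kernel
        // running on GPU 0 can dereference memory allocated on GPU 1,
        // which is how two 48GB cards can behave like one larger pool.
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);
        std::printf("P2P enabled between GPU 0 and GPU 1\n");
    } else {
        // This is the branch you hit when P2P is unavailable: every
        // cross-GPU access has to become an explicit copy instead.
        std::printf("No peer access (0->1: %d, 1->0: %d)\n", can01, can10);
    }
    return 0;
}
```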

I see people on this thread basically demonstrating the reason this was done: they know 2x RTX 6000s would make cheaper inference and/or training GPUs with a model-parallel memory split. This is annoying, but it would be great if NVIDIA could remember its roots in graphics and graphics + ML workloads. Otherwise I might as well just use 2x 4090s. Sure, I don’t get P2P, but even P2P plus an extra 24GB of memory (at lower bandwidth, though it shaves some TDP) and a ~10 theoretical TFLOPS difference in FP16/32 on a lower-clocked GPU is not worth the massive price difference.
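And without peer access, the only cross-GPU path is an explicit staged copy, which is exactly what the 2x 4090 route looks like today. A sketch of that model-parallel split, with a purely illustrative buffer size:

```cpp
#include <cuda_runtime.h>

int main() {
    // Hypothetical split of a large scene/model across two cards when
    // pooling is unavailable: each device holds half the data.
    const size_t half = 1ull << 30;  // 1 GiB per device, illustrative
    float *d0 = nullptr, *d1 = nullptr;

    cudaSetDevice(0);
    cudaMalloc(&d0, half);
    cudaSetDevice(1);
    cudaMalloc(&d1, half);

    // cudaMemcpyPeer works with or without peer access enabled, but
    // without P2P the driver stages the transfer through system memory,
    // so you cross the PCIe bus twice instead of taking a direct hop.
    cudaMemcpyPeer(d1, 1, d0, 0, half);

    cudaFree(d1);
    cudaSetDevice(0);
    cudaFree(d0);
    return 0;
}
```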

Just to put it in perspective: the 4090 retailed in 2022 for $1,599, the RTX 6000 for $6,799…