RTX 6000 Ada - no more NVLink even on Pro GPUs?

Hi Nvidia, I'm working on a VR ray tracing title and wanted to add NVLink support to boost performance in remote cloud rendering situations, but it appears that even your professional GPU lineup no longer has the NVLink connector.

Is NVLink truly a dead technology? I mostly wanted it to double path tracing performance in VR, but I'm wondering to what degree NVLink is even necessary or beneficial for such workloads, since the two eyes can be rendered completely independently. In that case I may be better off simply buying two 4090s. Any caveats there?

Hello belakampis1,

NVLink is far from dead. If you check the information on our pages you will see that the technology is still in use and under active development.

The fact that it is no longer supported on desktop or workstation GPUs does not mean the technology is discontinued.

And basically you are correct: NVLink is most beneficial when you need to transfer huge amounts of data between GPUs, as in deep learning tasks in HPC settings. For VR applications, on the other hand, even if there were a need to transfer whole frames between GPUs, PCIe 4.0 offers enough bandwidth even for 4K at 120 Hz, and then some.
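As a rough back-of-the-envelope check (assuming uncompressed 32-bit RGBA frames as the worst case, which is my assumption, not a measurement):

```latex
3840 \times 2160\ \text{px} \times 4\ \text{B/px} \approx 33\ \text{MB per eye per frame}
2\ \text{eyes} \times 33\ \text{MB} \times 120\ \text{Hz} \approx 8\ \text{GB/s}
\text{PCIe 4.0 x16} \approx 31.5\ \text{GB/s per direction}
```

So even fully uncompressed stereo 4K at 120 Hz would use roughly a quarter of the available PCIe 4.0 bandwidth.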

So if your engine supports multi-GPU rendering, for example alternate frame rendering, you should be fine with a 4090. But you should take into account that desktop GPUs are not designed for any kind of server setup.

I hope this helps.

Dear Markus,

This is a very, very unfortunate decision by NVIDIA. We are scientists and fully utilize NVLink in our dual RTX A6000 setup for machine learning.
So, basically, there is no possibility for us to move to the RTX 6000 Ada Generation and pool their combined 96 GB of memory to train GANs, transformers, or just big ANNs of other types?

What would you recommend then? How, with limited resources, can we build a similar ML PC with enough video RAM on board, but, most importantly, pooled together?

Thanks in advance

Welcome @alex1988d to the NVIDIA developer forums.

I cannot say much about any decisions or their possible impact. But of course there are alternatives.

The simplest is to just make use of PCIe. While not as fast as NVLink, it still lets you use CUDA across multiple devices the same way NVLink does.
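For example, here is a minimal PyTorch sketch of what that looks like in practice (the device indices and tensor sizes are just placeholders for illustration):

```python
import torch

# Requires a machine with at least two CUDA devices.
if torch.cuda.device_count() >= 2:
    # Check whether the two GPUs can reach each other directly over
    # PCIe (peer-to-peer). If not, copies are staged through host RAM.
    p2p = torch.cuda.can_device_access_peer(0, 1)
    print(f"P2P between GPU 0 and GPU 1: {p2p}")

    # A device-to-device copy works either way; peer access just makes
    # it faster by skipping the round trip through system memory.
    x = torch.randn(1024, 1024, device="cuda:0")
    y = x.to("cuda:1")  # travels over PCIe (or NVLink, if present)
```

The same CUDA peer-access machinery is what NVLink accelerates; without the bridge you simply fall back to PCIe bandwidth.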

I can imagine that buying a full-fledged DGX H100 is beyond the budget of smaller companies or educational institutions. For that there are alternatives like cloud service providers, for example the NVIDIA partner Cyxtera. Check the DGX page for DGX as a Service.

Using virtual GPUs is another option; there are also service providers who enable researchers to make use of the latest hardware.

Last but not least, you also have the possibility to test-drive AI development on H100 through our LaunchPad program.

Maybe some of these options are a viable alternative for you.

I hope I could help.

Thanks!

Thanks for your reply, Markus.
We have never tried to pool both GPUs' VRAM together over PCIe x16 Gen 4, but my guess is that even if it works without much tweaking, it would slow down training significantly.
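Just to be concrete about what I mean by pooling, something like this toy model-parallel sketch, where each half of the network lives on its own card and only the activations cross PCIe (the layer sizes and split point are made up):

```python
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    """First half on GPU 0, second half on GPU 1, so the parameters
    can use the combined VRAM of both cards."""
    def __init__(self):
        super().__init__()
        self.part1 = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU()).to("cuda:0")
        self.part2 = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU()).to("cuda:1")

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        # Only the activations cross the PCIe bus here, not the weights.
        return self.part2(x.to("cuda:1"))

model = SplitModel()
out = model(torch.randn(8, 4096))  # needs two visible CUDA devices
```

How much this costs in throughput would depend on how large the activations at the split point are compared to the compute on either side.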
We can't use cloud services for two reasons: 1. the risk of data leaks, and 2. the constant modification of ANNs during iterative development. In other words, we need a machine that sits right here, where we are not limited by training time or the number of tasks, the way AWS and its alternatives limit us.
Yes, a DGX is too expensive, but of course it would be the ideal option for us.
We need something like before: two GPUs at $5k each plus $5k for the rest of the machine, and we have a nice $15,000 rig with 96 GB of VRAM.
I guess we will just stay with the RTX A6000 for now, as there are no alternatives so far.
The next step of the upgrade, despite the price, would be dual H100 PCIe cards, since those were promised to support NVLink.
Best,
Alex


I agree with @alex1988d; a cloud GPU service can never be an alternative to NVLink. I think NVIDIA removed the feature with the general consumer market in mind, but there are specific applications and GPU setups where fast data transfer between GPUs is crucial. NVIDIA could still have kept NVLink on the Pro versions.
