Can NVLink combine 2x GPUs into 1x Big GPU?

I am considering installing 2x Quadro RTX 8000s in my deep learning machine and connecting them with NVLink. But I was wondering: can I really increase GPU memory to 96 GB of GDDR6 with 2x RTX 8000s via NVLink? I have a high volume of training images, and my current GPU, a GTX 1080 Ti, is already short of memory. 96 GB of GPU RAM would be plenty for my training images. I am also wondering whether TensorFlow can access the NVLinked Quadro RTX 8000s as a single GPU. Can TensorFlow communicate with the NVLink GPUs as one big GPU? And does NVLink work with CUDA and deep learning frameworks such as TensorFlow or PyTorch?

Hey!
There is no clear answer to this question. At least, unfortunately, I could not find one. I have 2x Titan RTX but wasn't able to make it work, nor to find any information on how to do so :)
If anybody can help, it would be very much appreciated!

Cheers,
Magic

Hi m.rokosz,

Thanks for your comment; I found the answer. I contacted Boxx (an NVIDIA GPU workstation vendor) and they told me that 2x RTX 2080 Ti cannot be combined into a single GPU by NVLink. NVLink only provides a fast connection between GPUs, not integration into one device. They told me this, but I would like to test it myself to be sure; unfortunately I don't have the money for the 2x RTXs.

As correctly noted above, NVLink provides a fast interconnect between GPUs, but it does not aggregate those GPUs into a single logical device. That said, DL training can usually be spread efficiently across multiple GPUs by increasing the minibatch size and distributing a different subset of images to each GPU. Horovod is a third-party tool provided in our containers that simplifies the task of parallelizing over multiple GPUs (or even multiple hosts). Alternatively, you can use TF's Distribution Strategies approach.
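To make the data-parallel idea concrete, here is a minimal sketch in plain NumPy (no GPUs or frameworks required), using hypothetical helper names. Each "device" computes a gradient on its own shard of the minibatch, and the gradients are averaged, which is conceptually what Horovod or `tf.distribute.MirroredStrategy` do for you, with NVLink/NCCL making the gradient exchange fast:

```python
import numpy as np

def shard_batch(x, y, num_devices):
    # Split one large minibatch into equal shards, one per "device".
    return zip(np.array_split(x, num_devices), np.array_split(y, num_devices))

def local_gradient(w, x, y):
    # Gradient of mean squared error for a simple linear model y ~ x @ w.
    pred = x @ w
    return 2.0 * x.T @ (pred - y) / len(x)

def data_parallel_step(w, x, y, num_devices, lr=0.01):
    # Each device computes a gradient on its shard; the gradients are then
    # averaged (the all-reduce step) and one SGD update is applied.
    grads = [local_gradient(w, xs, ys)
             for xs, ys in shard_batch(x, y, num_devices)]
    return w - lr * np.mean(grads, axis=0)
```

With equal shard sizes, the averaged per-shard gradients equal the full-batch gradient, so adding GPUs speeds up training without changing the result of each step; it does not pool the GPUs' memory into one address space.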