Use all 24 GB for one application on the K80 GPU

The Tesla K80 has 24 GB of memory but, as far as I understand, it is shared between two GK210 GPUs on the same card. So it is effectively a card with two 12 GB GPUs.
But is it still possible to use all 24 GB for one application, e.g. training a large model in PyTorch or Keras?

It’s possible to use the memory from the other device. At the CUDA level, this is done with CUDA peer-to-peer (P2P) functionality. This is discussed in many places, and there are various sample codes demonstrating peer memory usage.
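To make that concrete, here is a minimal CUDA runtime-API sketch of the idea: allocate a buffer on one GK210, enable peer access from the other, and let a kernel dereference the peer pointer directly. The device indices 0 and 1 and the kernel itself are assumptions for illustration; adjust them for how the K80's two GPUs are enumerated on your system.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: each thread reads one element through the peer
// mapping, so the load travels over the PCIe peer link.
__global__ void touch(const float* p, float* out, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) out[i] = p[i] * 2.0f;
}

int main() {
    // Assumes the two GK210s enumerate as devices 0 and 1.
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) { printf("P2P not supported between 0 and 1\n"); return 1; }

    // Allocate the "extra" buffer in device 1's memory.
    size_t n = 1 << 20;
    float *peerBuf = nullptr, *result = nullptr;
    cudaSetDevice(1);
    cudaMalloc(&peerBuf, n * sizeof(float));

    // Switch to device 0, enable access to device 1's memory, and launch
    // a kernel that dereferences the peer pointer directly.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);  // second argument (flags) must be 0
    cudaMalloc(&result, n * sizeof(float));
    touch<<<(unsigned)((n + 255) / 256), 256>>>(peerBuf, result, n);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(result);
    cudaSetDevice(1);
    cudaFree(peerBuf);
    return 0;
}
```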

I won’t be able to explain how to do that in PyTorch or Keras. In general, bulk accesses to peer memory will be much slower than accesses to the device’s own memory: peer accesses flow at approximately the PCIe rate (in this case, something like 8 GB/s) versus over 100 GB/s for local device memory. So it’s unlikely to be interesting for general usage.
