PyTorch with NVIDIA K80?

Hey, quick question regarding deep learning and the K80.

I have a simple network using torchvision's densenet161. I want to train it to classify some images, but I get a CUDA out-of-memory error once usage reaches 11,355 MB.

My theory is that the K80 is really two GPUs with 12 GB each, and training can only use one GPU's 12 GB. Is that so, or should I be able to access the full 24 GB naturally?

Thank you

And sorry if this is the wrong place to ask this question.

Correct. The K80 is a dual-GPU board, and each half shows up as its own CUDA device with its own ~12 GB. A single allocation (or a plain `model.cuda()`) only ever uses one device, so you won't see 24 GB unless you explicitly use both.