Installing driver for Tesla K80

I installed CUDA 9 for a Tesla K80. When I run nvidia-smi, it shows something like this:

| Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |
| N/A   31C    P0    73W / 149W |      0MiB / 11441MiB |    100%      Default |

But I think the K80 supports 24 GB of memory, and I am only able to see about 11 GB.

I installed these versions:
sudo apt install nvidia-384 nvidia-384-dev

cuda_9.0.176_384.81_linux.run

The K80 has 24GB, but it is split between 2 GPUs. The K80 is a device with 2 GPUs (logical devices) combined onto 1 board. So this is normal: each device has 12GB.
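As a quick sanity check, a minimal CUDA program can enumerate the devices visible to the runtime and report each one's memory. On a full K80 board it should list two devices of roughly 12 GB each; on a single-GPU cloud instance it will list only one. This is just a sketch using the standard runtime API:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Visible CUDA devices: %d\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // totalGlobalMem is in bytes; each K80 device reports ~12 GB
        printf("Device %d: %s, %.1f GiB\n", i, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```

Running `nvidia-smi -L` gives the same device list without compiling anything.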

If you are only seeing 1 GPU device then it may be because you are using a cloud service.

Hello Robert_Crovella,
Thanks for your reply.
Yes, I was wondering about that: it is supposed to be 24 GB but only shows approx. 12 GB. I am using the Google cloud service and can only see one device right now. Is there any way to see both devices, in case I missed something in the installation of CUDA and the NVIDIA driver? I opted for a K80 with full configuration. Or do I need to query the cloud service provider?

Thanks in advance.

That is a function of how your cloud service provider offers machine (instance) types.

https://cloud.google.com/compute/all-pricing#gpus

“Note: NVIDIA® K80® boards contain two GPUs each. The pricing for K80 GPUs is by GPU, not by the board.”

You should research with GCP how to get a machine with 2 K80 GPUs in it, if that is what you want.
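For illustration only (the exact zone, instance name, and machine type here are placeholders, not from this thread), GCP lets you attach multiple K80 GPUs to an instance with the `--accelerator` flag of `gcloud compute instances create`, which would expose both GPUs of a board to the guest:

```shell
# Sketch: create a GCP instance with 2 K80 GPUs attached.
# "my-k80-instance" and the zone are hypothetical values; adjust to your project.
gcloud compute instances create my-k80-instance \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-k80,count=2 \
    --maintenance-policy=TERMINATE
```

With `count=2`, `nvidia-smi` inside the instance should then list two ~12 GB devices.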