Interpreting nvidia-smi output


We have a box that presumably (as I was told) has 4x K80 GPUs. But the output of the nvidia-smi command tells a different story: it says we have 8. Am I misinterpreting this?

nvidia-smi -L

GPU 0: Tesla K80 (UUID: GPU-4376cf29-89af-xxx…)
GPU 1: Tesla K80 (UUID: GPU-7b96af99-9d86-xxx…)
GPU 2: Tesla K80 (UUID: GPU-6166e2ed-a2d0-xxx…)
GPU 3: Tesla K80 (UUID: GPU-2969a997-8837-xxx…)
GPU 4: Tesla K80 (UUID: GPU-a1dda04e-1c02-xxx…)
GPU 5: Tesla K80 (UUID: GPU-179a403b-8529-xxx…)
GPU 6: Tesla K80 (UUID: GPU-33e731dd-fee2-xxx…)
GPU 7: Tesla K80 (UUID: GPU-e48856c6-ff13-xxx…)

The query command reports the same, as shown below:

nvidia-smi -i 0 -q

==============NVSMI LOG==============

Timestamp : Mon Aug 22 13:36:18 2016
Driver Version : 352.93

Attached GPUs : 8
GPU 0000:06:00.0
Product Name : Tesla K80
Product Brand : Tesla
Display Mode : Disabled
Display Active : Disabled
Persistence Mode : Disabled
Accounting Mode : Disabled
Accounting Mode Buffer Size : 1920

Please advise, as I am confused. I don't know of a rack-mount server that can pack 8x K80s into a 1U form factor, hence I want to correct my interpretation.


A single K80 board contains 2 GPU devices. From a programming perspective they are treated as separate GPUs, and nvidia-smi reports each K80 as 2 separate GPUs.
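So the arithmetic works out: `nvidia-smi -L` prints one line per CUDA device, and dividing that count by two gives the number of physical K80 boards. A minimal sketch of that interpretation (the helper name and the truncated UUIDs are illustrative, not from any NVIDIA tool):

```python
def count_cuda_devices(smi_output: str) -> int:
    """Count CUDA devices in `nvidia-smi -L` output: one 'GPU N:' line each."""
    return sum(1 for line in smi_output.splitlines() if line.startswith("GPU "))

# Output shaped like the listing above (UUIDs shortened for illustration).
sample = """GPU 0: Tesla K80 (UUID: GPU-4376cf29-...)
GPU 1: Tesla K80 (UUID: GPU-7b96af99-...)
GPU 2: Tesla K80 (UUID: GPU-6166e2ed-...)
GPU 3: Tesla K80 (UUID: GPU-2969a997-...)
GPU 4: Tesla K80 (UUID: GPU-a1dda04e-...)
GPU 5: Tesla K80 (UUID: GPU-179a403b-...)
GPU 6: Tesla K80 (UUID: GPU-33e731dd-...)
GPU 7: Tesla K80 (UUID: GPU-e48856c6-...)"""

devices = count_cuda_devices(sample)
boards = devices // 2  # each K80 board carries two GPU devices
print(devices, boards)  # 8 CUDA devices -> 4 physical K80 boards
```

The same reasoning explains the `Attached GPUs : 8` line in the `-q` output: the driver enumerates GPU devices, not boards.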

Great, thank you for the clarification. I get it now!

Does this mean that this kind of device can be used for multi-GPU software development? Can the two GPUs be treated as completely independent devices on a single node? Thank you in advance!