nvidia-smi reports 3 GPUs but deviceQuery reports only 2

I started with two GTX 1080 Ti GPUs and my machine recognized both.
I recently added a GTX 1060 to drive my monitors so that both 1080 Tis are free for ML.
Now nvidia-smi reports all 3 GPUs, but deviceQuery reports only 2:
it is missing one of the 1080 Tis. Other programs also miss the second 1080 Ti.

I’m on Ubuntu 16.04 LTS.

Here is the output:

bash:/usr/local/cuda/extras/demo_suite$ ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 2 CUDA Capable device(s)

Device 0: "GeForce GTX 1060 6GB"
  CUDA Driver Version / Runtime Version          9.0 / 9.0
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 6065 MBytes (6359285760 bytes)
  (10) Multiprocessors, (128) CUDA Cores/MP:     1280 CUDA Cores
  GPU Max Clock rate:                            1759 MHz (1.76 GHz)
  Memory Clock rate:                             4004 Mhz
  Memory Bus Width:                              192-bit
  L2 Cache Size:                                 1572864 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 2 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 1: "GeForce GTX 1080 Ti"
  CUDA Driver Version / Runtime Version          9.0 / 9.0
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 11172 MBytes (11715084288 bytes)
  (28) Multiprocessors, (128) CUDA Cores/MP:     3584 CUDA Cores
  GPU Max Clock rate:                            1582 MHz (1.58 GHz)
  Memory Clock rate:                             5505 Mhz
  Memory Bus Width:                              352-bit
  L2 Cache Size:                                 2883584 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 129 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
> Peer access from GeForce GTX 1060 6GB (GPU0) -> GeForce GTX 1080 Ti (GPU1) : No
> Peer access from GeForce GTX 1080 Ti (GPU1) -> GeForce GTX 1060 6GB (GPU0) : No

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 9.0, NumDevs = 2, Device0 = GeForce GTX 1060 6GB, Device1 = GeForce GTX 1080 Ti
Result = PASS

bash:/usr/local/cuda/extras/demo_suite$ nvidia-smi
Fri Jun 22 23:57:03 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.130                Driver Version: 384.130                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  Off  | 00000000:02:00.0  On |                  N/A |
| 49%   54C    P0    25W / 120W |   3230MiB /  6064MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 108...  Off  | 00000000:03:00.0 Off |                  N/A |
| 28%   37C    P8     9W / 250W |      1MiB / 11172MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 108...  Off  | 00000000:81:00.0 Off |                  N/A |
| 28%   37C    P8    16W / 250W |      2MiB / 11172MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1298      G   /usr/lib/xorg/Xorg                          2327MiB |
|    0      2889      G   compiz                                       644MiB |
|    0     26451      G   ...-token=93429DDE62D483B80BDFFE30C4640FA2   256MiB |
+-----------------------------------------------------------------------------+

bash:/usr/local/cuda/extras/demo_suite$ nvidia-smi -L
GPU 0: GeForce GTX 1060 6GB (UUID: GPU-636542da-27ae-8916-0a0e-2dc3959a3153)
GPU 1: GeForce GTX 1080 Ti (UUID: GPU-950c1e6c-6fd6-aa2e-bc1f-eece2062a980)
GPU 2: GeForce GTX 1080 Ti (UUID: GPU-de2a9056-c77c-cc20-4784-2a91ad6eac46)

How are the two GTX 1080 Tis connected? What kind of motherboard slots are they in?

The strange thing (to me) in the nvidia-smi output is that the bus IDs of the first two cards are 2 and 3, but the bus ID of the last card (the second GTX 1080 Ti) is 129 (0x81), suggesting that this GPU is somehow connected differently than the first two. I wonder whether that is causing issues.
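Note that the two tools print the same bus ID in different bases: deviceQuery reports it in decimal (129) while nvidia-smi shows it in hex (81:00.0). A quick sanity check that these refer to the same PCI bus:

```shell
# nvidia-smi shows the second 1080 Ti at PCI bus 0x81; deviceQuery
# reports bus 129 -- the same value, just printed in decimal:
echo $((0x81))   # prints 129
```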

I assume you have checked that the power supply is adequate for all three GPUs (your PSU should be rated for 1000W or more).

Perhaps you have set the environment variable CUDA_VISIBLE_DEVICES?
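An easy way to check: print the variable in the shell you run your ML tools from, and unset it to lift any restriction. A small illustration (the `1,2` value here is just an example):

```shell
# Simulate a shell where the variable was set, then inspect it.
# CUDA_VISIBLE_DEVICES takes comma-separated CUDA device indices
# (or GPU UUIDs from `nvidia-smi -L`); devices not listed are
# invisible to any CUDA program started from this shell.
export CUDA_VISIBLE_DEVICES=1,2
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"

# Remove the restriction so CUDA enumerates every GPU again:
unset CUDA_VISIBLE_DEVICES
echo "after unset: '${CUDA_VISIBLE_DEVICES:-}'"
```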

txbob appears to be a fan of Occam’s razor :-) Yes, that’s the first thing you would want to check.

Thanks! txbob was correct: I had set CUDA_VISIBLE_DEVICES incorrectly. The 1060 has CUDA device id 1, but I thought it would have id 0, so I was setting CUDA_VISIBLE_DEVICES to 1,2 to make my ML tools use only the 1080 Tis.
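For anyone hitting the same confusion: a sketch of the corrected setting, assuming the enumeration you observed (1060 at CUDA index 1, so the 1080 Tis at 0 and 2) stays stable. Note that CUDA's default device order is a performance heuristic, not PCI order, so it need not match nvidia-smi's numbering; setting CUDA_DEVICE_ORDER=PCI_BUS_ID makes the two agree.

```shell
# With the 1060 at CUDA index 1, the two 1080 Tis are indices 0 and 2:
export CUDA_VISIBLE_DEVICES=0,2

# More robust alternative: pin CUDA's enumeration to PCI bus order so
# its indices match nvidia-smi's (1060 = 0, 1080 Tis = 1 and 2):
#   export CUDA_DEVICE_ORDER=PCI_BUS_ID
#   export CUDA_VISIBLE_DEVICES=1,2

echo "$CUDA_VISIBLE_DEVICES"
```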