Strange output from nvidia-smi topo

When I check GPUDirect RDMA capability on a uni-processor system, nvidia-smi shows a strange result.

[root@magro ~]# nvidia-smi topo -mp
	GPU0	mlx5_0	mlx5_1	CPU Affinity
GPU0	 X 	SYS	SYS	0-23
mlx5_0	SYS	 X 	PIX	
mlx5_1	SYS	PIX	 X 	

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe switches (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing a single PCIe switch

I use a SYS-1019GP-TT, which supports a single Skylake-SP processor that also controls the entire PCIe bus through its built-in controller.
https://www.supermicro.com/products/system/1U/1019/SYS-1019GP-TT.cfm

So there should be no chance of QPI/UPI traversal; however, nvidia-smi says the connections between GPU0 and mlx5_X traverse the SMP interconnect.
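
For what it's worth, lscpu also reports a single NUMA node, consistent with the CPU Affinity of 0-23 above. A minimal check one could run (illustrative output for a 24-core, single-socket box; exact spacing varies by util-linux version):

$ lscpu | grep -i numa
NUMA node(s):          1
NUMA node0 CPU(s):     0-23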

Any ideas?

Skylake Xeon CPUs from Intel introduced a new feature (compared with previous Xeon CPUs): multiple PCIe root complexes in a single socket.

As a result, the PCIe devices connected to a Skylake CPU are not necessarily on the same logical fabric, and connections between devices may effectively flow over an "SMP" connection, which is the tool's way of saying that traffic flows from one root complex to another. In the case of Skylake, those root complexes can be distinct even within a single Xeon socket.
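
One way to see this directly on Linux is to list the PCIe root complexes exposed in sysfs and resolve which root each device hangs under. This is a minimal sketch; the bus and device addresses below are illustrative, not taken from the system above:

$ ls -d /sys/devices/pci0000:*
/sys/devices/pci0000:00  /sys/devices/pci0000:17  /sys/devices/pci0000:3a

$ readlink -f /sys/bus/pci/devices/0000:3b:00.0
/sys/devices/pci0000:3a/0000:3a:00.0/0000:3b:00.0

If the GPU and the mlx5 devices resolve to different pci0000:XX roots, the tool reports SYS for that pair even though both roots sit in the same physical socket.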