ConnectX-7 NICs no longer appear

On one of our four DGX Sparks, the ConnectX-7 NICs no longer appear. After a reboot, I can find the following messages related to the PCI addresses the ConnectX-7 normally appears on:

galvanick@spark-79c0:~$ sudo dmesg | grep -e 'pci 000[0,2]:00'
[ 0.074254] pci 0000:00:00.0: [10de:22ce] type 01 class 0x060400 PCIe Root Port
[ 0.074272] pci 0000:00:00.0: PCI bridge to [bus 01-0f]
[ 0.074308] pci 0000:00:00.0: broken device, retraining non-functional downstream link at 2.5GT/s
[ 1.077183] pci 0000:00:00.0: retraining failed
[ 1.077233] pci 0000:00:00.0: PME# supported from D0 D3hot D3cold
[ 1.180226] pci 0000:00:00.0: PCI bridge to [bus 01-0f]
[ 1.180245] pci 0000:00:00.0: Max Payload Size set to 512/ 512 (was 128), Max Read Rq 256
[ 1.180988] pci 0002:00:00.0: [10de:22ce] type 01 class 0x060400 PCIe Root Port
[ 1.181004] pci 0002:00:00.0: PCI bridge to [bus 01-0f]
[ 1.181039] pci 0002:00:00.0: broken device, retraining non-functional downstream link at 2.5GT/s
[ 2.186001] pci 0002:00:00.0: retraining failed
[ 2.186048] pci 0002:00:00.0: PME# supported from D0 D3hot D3cold
[ 2.237704] pci 0002:00:00.0: PCI bridge to [bus 01-0f]
[ 2.237722] pci 0002:00:00.0: Max Payload Size set to 512/ 512 (was 128), Max Read Rq 256
[ 3.433436] pci 0000:00:00.0: Adding to iommu group 6
[ 3.433880] pci 0002:00:00.0: Adding to iommu group 7
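For anyone triaging the same symptom: since the root ports enumerate but the downstream link never trains, the bridge link status can be read directly (a quick sketch against the root-port addresses above):

# LnkCap shows what the port can do; LnkSta shows what was actually
# negotiated (or that no link is active downstream).
sudo lspci -s 0000:00:00.0 -vv | grep -E 'LnkCap|LnkSta'
sudo lspci -s 0002:00:00.0 -vv | grep -E 'LnkCap|LnkSta'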

The bridge devices are present in the device tree but not the NIC itself:

galvanick@spark-79c0:~$ sudo lspci
0000:00:00.0 PCI bridge: NVIDIA Corporation Device 22ce (rev 01)
0002:00:00.0 PCI bridge: NVIDIA Corporation Device 22ce (rev 01)
0004:00:00.0 PCI bridge: NVIDIA Corporation Device 22ce (rev 01)
0004:01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd Device a810
0007:00:00.0 PCI bridge: NVIDIA Corporation Device 22d0 (rev 01)
0007:01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. Device 8127 (rev 05)
0009:00:00.0 PCI bridge: NVIDIA Corporation Device 22d0 (rev 01)
0009:01:00.0 Network controller: MEDIATEK Corp. Device 7925
000f:00:00.0 PCI bridge: NVIDIA Corporation Device 22d1
000f:01:00.0 VGA compatible controller: NVIDIA Corporation Device 2e12 (rev a1)

We have done apt update/dist-upgrade and fwupdmgr update, with a reboot, to no avail. We would like guidance on how to proceed with the hardware to restore the ConnectX-7 functionality.
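The only software-side recovery we could think of beyond that is a PCIe remove/rescan of the affected root ports (a sketch; we'd welcome confirmation this is safe on Spark, and it presumably only helps if the link failure is transient):

# Remove the two root ports the CX-7 normally sits behind, then rescan.
# A genuinely dead downstream device will just fail retraining again.
echo 1 | sudo tee /sys/bus/pci/devices/0000:00:00.0/remove
echo 1 | sudo tee /sys/bus/pci/devices/0002:00:00.0/remove
echo 1 | sudo tee /sys/bus/pci/rescan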

I did a field diagnostic, and oddly it passed.

nvidia-bug-report.log.gz (366.8 KB)

I’m experiencing the exact same issue. NVIDIA support suggested the following steps:

  1. Perform a full cold power cycle of the DGX Spark system:

    • Completely power off the system.

    • Disconnect the power cable.

    • Wait approximately 30–60 seconds.

    • Reconnect power and boot the system again.

  2. Ensure that a supported 200 Gbps QSFP cable is connected properly before booting.

I followed these instructions precisely, but the ConnectX-7 functionality is still not working.

Let me know if there are further steps to try.

Yeah, I have tried that already - thank you for the idea, though.

I opened a support ticket but was asked to post here instead. I’m hoping I can get some guidance - or, if it’s not addressable in the field, RMA instructions.

I tried to connect two boxes using the methods outlined at dgx-spark-playbooks/nvidia/connect-two-sparks at main · NVIDIA/dgx-spark-playbooks · GitHub, but the ibdev2netdev command returns nothing. I am running Ubuntu 24.04 (Noble) with kernel 6.17.0-1008-nvidia (ARM64); is that what your boxes are running?
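In case it helps narrow things down, these are the rough checks I'd compare between a working and a non-working box before trusting ibdev2netdev (a sketch; interface names will differ per system):

# Does the OS see the ConnectX-7 at all?
lspci | grep -i mellanox
# Are the RDMA devices registered with the kernel?
ls /sys/class/infiniband
# Did mlx5_core bind and create the netdevs?
ip -br link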

Hello NVIDIA Support,

I am reporting a connectivity failure on my DGX Spark Minis involving the ConnectX‑7 NIC and the QSFP cable that was shipped to me by NVIDIA.

Summary of the issue

The ConnectX‑7 NIC initializes correctly at the PCIe and driver level, but the QSFP port never powers up. As a result, the NIC continuously reports “Cable unplugged,” and the NVIDIA Spark Mellanox Firmware Manager refuses to proceed with firmware installation, stating that the cable is not connected.

Key technical findings

  • NIC is fully enumerated and operational at the PCIe level
  • PCIe link trains successfully at 32.0 GT/s x4
  • Firmware loads correctly (version 28.45.4028)
  • mlx5_core driver binds without errors
  • NIC reports the module as unplugged even when the cable is inserted. Repeated dmesg entries show:

    Port module 0: Cable unplugged
    Port module 1: Cable unplugged

  • NIC reports insufficient power when attempting to power the module. Multiple occurrences of:

    mlx5_core: Detected insufficient power on the PCIe slot (27W)

  • Firmware Manager refuses to install firmware due to missing cable detection:

    Cable is not connected for ConnectX7. Please connect the cable for firmware installation.
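For completeness, the module-detection state can also be probed from the host (a sketch; enp1s0f0np0 is the port name taken from the dmesg output below and may differ on other units):

# Read the QSFP module EEPROM through the NIC; an error or empty read
# here is consistent with the cage never powering the module.
sudo ethtool -m enp1s0f0np0
# Driver-level view of the same detection events:
sudo dmesg | grep -i 'port module'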

Cable details (shipped by NVIDIA)

Manufacturer: Amphenol

Part number: NJAAKK‑N911

Type: Passive DAC, 0.4m

Assessment

Based on the NIC logs, the ConnectX‑7 is rejecting the module because it does not detect a valid QSFP112‑class cable. The “insufficient power” and “cable unplugged” messages are consistent with the NIC refusing to power an unsupported or unqualified module.

Since this cable was provided directly by NVIDIA, I need confirmation on whether the NJAAKK‑N911 DAC is qualified for ConnectX‑7 on DGX Spark Mini. If it is not, please advise on the correct NVIDIA‑qualified QSFP112 cable (e.g., MCP7H00 or MCP7H50 series) for Spark‑to‑Spark connectivity.

Request

Please confirm:

  • Whether the Amphenol NJAAKK‑N911 cable is officially supported for ConnectX‑7 on DGX Spark Mini.
  • If not supported, the correct NVIDIA‑qualified QSFP112 cable part number.
  • Whether the “27W insufficient power” condition indicates a module‑qualification failure or a hardware issue.

Here is the system output:

dgxspark@spark-dc77:~$ sudo dmesg | grep -Ei 'mlx|mellanox|connectx|mlx5'
[ 1.475098] integrity: Loaded X.509 cert 'DGX_Mellanox_Driver: ff019ca15cf6d937483324223e343e4d858aa7c2'
[ 2.253009] mlx5_core 0000:01:00.0: enabling device (0000 -> 0002)
[ 2.253150] mlx5_core 0000:01:00.0: firmware version: 28.45.4028
[ 2.253172] mlx5_core 0000:01:00.0: 126.028 Gb/s available PCIe bandwidth (32.0 GT/s PCIe x4 link)
[ 2.612464] mlx5_core 0000:01:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 195312Mbps
[ 2.612856] mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
[ 2.618206] mlx5_core 0000:01:00.0: Flow counters bulk query buffer size increased, bulk_query_len(8)
[ 2.624838] mlx5_core 0000:01:00.0: Port module event: module 0, Cable unplugged
[ 2.625741] mlx5_core 0000:01:00.0: mlx5_pcie_event:326:(pid 11): Detected insufficient power on the PCIe slot (27W).
[ 2.638862] mlx5_core 0000:01:00.0: mlx5e: IPSec ESP acceleration enabled
[ 2.814846] mlx5_core 0000:01:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 enhanced)
[ 2.822625] mlx5_core 0000:01:00.1: enabling device (0000 -> 0002)
[ 2.822787] mlx5_core 0000:01:00.1: firmware version: 28.45.4028
[ 2.822809] mlx5_core 0000:01:00.1: 126.028 Gb/s available PCIe bandwidth (32.0 GT/s PCIe x4 link)
[ 3.194074] mlx5_core 0000:01:00.1: Rate limit: 127 rates are supported, range: 0Mbps to 195312Mbps
[ 3.194634] mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
[ 3.204484] mlx5_core 0000:01:00.1: Flow counters bulk query buffer size increased, bulk_query_len(8)
[ 3.212097] mlx5_core 0000:01:00.1: Port module event: module 1, Cable unplugged
[ 3.213168] mlx5_core 0000:01:00.1: mlx5_pcie_event:326:(pid 369): Detected insufficient power on the PCIe slot (27W).
[ 3.223788] mlx5_core 0000:01:00.1: mlx5e: IPSec ESP acceleration enabled
[ 3.365281] mlx5_core 0000:01:00.1: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 enhanced)
[ 3.372141] mlx5_core 0002:01:00.0: enabling device (0000 -> 0002)
[ 3.372296] mlx5_core 0002:01:00.0: firmware version: 28.45.4028
[ 3.372319] mlx5_core 0002:01:00.0: 126.028 Gb/s available PCIe bandwidth (32.0 GT/s PCIe x4 link)
[ 3.734243] mlx5_core 0002:01:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 195312Mbps
[ 3.734788] mlx5_core 0002:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
[ 3.739392] mlx5_core 0002:01:00.0: Flow counters bulk query buffer size increased, bulk_query_len(8)
[ 3.754158] mlx5_core 0002:01:00.0: Port module event: module 0, Cable unplugged
[ 3.754861] mlx5_core 0002:01:00.0: mlx5_pcie_event:326:(pid 369): Detected insufficient power on the PCIe slot (27W).
[ 3.758546] mlx5_core 0002:01:00.0: mlx5e: IPSec ESP acceleration enabled
[ 3.887558] mlx5_core 0002:01:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 enhanced)
[ 3.895416] mlx5_core 0002:01:00.1: enabling device (0000 -> 0002)
[ 3.895568] mlx5_core 0002:01:00.1: firmware version: 28.45.4028
[ 3.895595] mlx5_core 0002:01:00.1: 126.028 Gb/s available PCIe bandwidth (32.0 GT/s PCIe x4 link)
[ 4.264956] mlx5_core 0002:01:00.1: Rate limit: 127 rates are supported, range: 0Mbps to 195312Mbps
[ 4.266448] mlx5_core 0002:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
[ 4.272668] mlx5_core 0002:01:00.1: Flow counters bulk query buffer size increased, bulk_query_len(8)
[ 4.280907] mlx5_core 0002:01:00.1: Port module event: module 1, Cable unplugged
[ 4.281472] mlx5_core 0002:01:00.1: mlx5_pcie_event:326:(pid 390): Detected insufficient power on the PCIe slot (27W).
[ 4.296750] mlx5_core 0002:01:00.1: mlx5e: IPSec ESP acceleration enabled
[ 4.461172] mlx5_core 0002:01:00.1: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 enhanced)
[ 4.467125] mlx5_core 0002:01:00.1 enP2p1s0f1np1: renamed from eth3
[ 4.467712] mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0
[ 4.468259] mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1
[ 4.468834] mlx5_core 0002:01:00.0 enP2p1s0f0np0: renamed from eth2
[ 6.816664] MST:: : mst_init 1715: Mellanox Technologies Software Tools Driver - version 2.0.0
[ 7.344844] mlx5_core 0000:01:00.0: E-Switch: Unload vfs: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 7.351515] mlx5_core 0000:01:00.0: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 8.817303] mlx5_core 0002:01:00.0 enP2p1s0f0np0: Link down
[ 9.138583] mlx5_core 0002:01:00.1 enP2p1s0f1np1: Link down
[ 10.708604] mlx5_core 0000:01:00.0: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 10.945772] mlx5_core 0000:01:00.0: E-Switch: cleanup
[ 11.277203] mlx5_core 0000:01:00.1: E-Switch: Unload vfs: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 11.289505] mlx5_core 0000:01:00.1: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 11.633149] mlx5_core 0000:01:00.1 enp1s0f1np1: Link down
[ 13.964592] mlx5_core 0000:01:00.1: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 14.616989] mlx5_core 0000:01:00.1: E-Switch: cleanup
[ 14.943180] mlx5_core 0002:01:00.0: E-Switch: Unload vfs: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 14.965502] mlx5_core 0002:01:00.0: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 19.536583] mlx5_core 0002:01:00.0: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 20.182841] mlx5_core 0002:01:00.0: E-Switch: cleanup
[ 20.498895] mlx5_core 0002:01:00.1: E-Switch: Unload vfs: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 20.517502] mlx5_core 0002:01:00.1: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 24.217592] mlx5_core 0002:01:00.1: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 24.852664] mlx5_core 0002:01:00.1: E-Switch: cleanup

That sounds like a very different issue from mine! In my case the ConnectX-7 doesn’t even appear in the PCI device tree, so the driver never loads and no NIC is presented to the OS. The dmesg logs in my case report a broken device and failed retraining.
You definitely seem to have a problem, but it does not seem to be the same one as mine.

We ran into the same thing on one of our 9 or 10 GB10 boxes. What fixed it was a 1-minute-plus power-off. We have also tried three different DACs on this machine.

Since it was running in an 8x GB10 cluster, we started everything again, and about six hours in, the CX-7 did the same disappearing act. Interestingly, it is only that node. We also swapped in a 9th node as a replacement in the cluster, and it is not happening on that one.

Hi Patrick, engineering would like to review this unit.

Is this a DGX Spark Founder’s Edition? If so, please run NVIDIA DGX Spark Field Diagnostics | NVIDIA, DM me the logs, and we can discuss RMA options so we can get you a replacement and engineering can start looking at this symptom.

Hi - this is a Gigabyte AI TOP ATOM unit. They use the same ODM as the Lenovo, and we actually swapped in a Lenovo PGX, which is working. We did not have another Gigabyte to swap in, so that was the closest we had.

I can throw it in my bag for GTC next week; I get in Sunday, if someone is in SJ.

It is running as the back-end for a video we are filming later today, so I am happy to take it offline and bring it, but I will likely need it back soon-ish.

Mine is - can I take you up on that same offer as the originator of this issue?

Communicated on DM. Please proceed with RMA.

I have the same issue. It happened after boot-drive corruption and a firmware/OS recovery. An E-Switch event appears in dmesg about 10 seconds into boot that removes the ConnectX-7s from the PCIe list. Things tried:

  • Removed USB-C power for 15 min
  • BIOS default restore
  • Firmware recovery, twice
  • Field diags - all passed

Is there any other check, or is this a hardware issue?

[ 4.877340] mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1
[ 4.877574] mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0
[ 4.878015] mlx5_core 0002:01:00.0 enP2p1s0f0np0: renamed from eth2
[ 4.878661] mlx5_core 0002:01:00.1 enP2p1s0f1np1: renamed from eth3
[ 6.767004] mlx5_core 0000:01:00.0: E-Switch: Unload vfs: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 6.776891] mlx5_core 0000:01:00.0: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 8.106289] mlx5_core 0002:01:00.0 enP2p1s0f0np0: Link down
[ 8.418865] mlx5_core 0002:01:00.1 enP2p1s0f1np1: Link down
[ 9.942860] mlx5_core 0000:01:00.0: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 10.329388] mlx5_core 0000:01:00.0: E-Switch: cleanup
[ 10.666411] mlx5_core 0000:01:00.1: E-Switch: Unload vfs: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 10.680968] mlx5_core 0000:01:00.1: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 10.915214] mlx5_core 0000:01:00.1 enp1s0f1np1: Link down
[ 13.222947] mlx5_core 0000:01:00.1: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
[ 13.888357] mlx5_core 0000:01:00.1: E-Switch: cleanup

@blake45 the CX-7 ports are hot-pluggable now. If you don’t have a cable plugged in, the ports will be disabled; the E-Switch entries are from that process.

If the ports disappear when the cable is present, then it’s an issue!
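If you want to confirm which case you're in, watch the kernel log while plugging and unplugging the DAC (a quick sketch):

# You should see "Port module event" lines as the cable is inserted
# and removed if the port and cage are healthy.
sudo dmesg --follow | grep -Ei 'port module|mlx5'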

Sample journal entries with no cable:

spark1 kernel: mlx5_core 0000:01:00.0: enabling device (0000 -> 0002)
spark1 kernel: mlx5_core 0000:01:00.0: firmware version: 28.45.4028
spark1 kernel: mlx5_core 0000:01:00.0: 126.028 Gb/s available PCIe bandwidth (32.0 GT/s PCIe x4 link)
spark1 kernel: mlx5_core 0000:01:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 195312Mbps
spark1 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
spark1 kernel: mlx5_core 0000:01:00.0: Flow counters bulk query buffer size increased, bulk_query_len(8)
spark1 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable unplugged
spark1 kernel: mlx5_core 0000:01:00.0: mlx5_pcie_event:326:(pid 12): Detected insufficient power on the PCIe slot (27W).
spark1 kernel: mlx5_core 0000:01:00.0: mlx5e: IPSec ESP acceleration enabled
spark1 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 enhanced)
spark1 kernel: mlx5_core 0000:01:00.1: enabling device (0000 -> 0002)
spark1 kernel: mlx5_core 0000:01:00.1: firmware version: 28.45.4028
spark1 kernel: mlx5_core 0000:01:00.1: 126.028 Gb/s available PCIe bandwidth (32.0 GT/s PCIe x4 link)
spark1 kernel: mlx5_core 0000:01:00.1: Rate limit: 127 rates are supported, range: 0Mbps to 195312Mbps
spark1 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
spark1 kernel: mlx5_core 0000:01:00.1: Flow counters bulk query buffer size increased, bulk_query_len(8)
spark1 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable unplugged
spark1 kernel: mlx5_core 0000:01:00.1: mlx5_pcie_event:326:(pid 374): Detected insufficient power on the PCIe slot (27W).
spark1 kernel: mlx5_core 0000:01:00.1: mlx5e: IPSec ESP acceleration enabled
spark1 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 enhanced)
spark1 kernel: mlx5_core 0002:01:00.0: enabling device (0000 -> 0002)
spark1 kernel: mlx5_core 0002:01:00.0: firmware version: 28.45.4028
spark1 kernel: mlx5_core 0002:01:00.0: 126.028 Gb/s available PCIe bandwidth (32.0 GT/s PCIe x4 link)
spark1 kernel: mlx5_core 0002:01:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 195312Mbps
spark1 kernel: mlx5_core 0002:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
spark1 kernel: mlx5_core 0002:01:00.0: Flow counters bulk query buffer size increased, bulk_query_len(8)
spark1 kernel: mlx5_core 0002:01:00.0: Port module event: module 0, Cable unplugged
spark1 kernel: mlx5_core 0002:01:00.0: mlx5_pcie_event:326:(pid 12): Detected insufficient power on the PCIe slot (27W).
spark1 kernel: mlx5_core 0002:01:00.0: mlx5e: IPSec ESP acceleration enabled
spark1 kernel: mlx5_core 0002:01:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 enhanced)
spark1 kernel: mlx5_core 0002:01:00.1: enabling device (0000 -> 0002)
spark1 kernel: mlx5_core 0002:01:00.1: firmware version: 28.45.4028
spark1 kernel: mlx5_core 0002:01:00.1: 126.028 Gb/s available PCIe bandwidth (32.0 GT/s PCIe x4 link)
spark1 kernel: mlx5_core 0002:01:00.1: Rate limit: 127 rates are supported, range: 0Mbps to 195312Mbps
spark1 kernel: mlx5_core 0002:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
spark1 kernel: mlx5_core 0002:01:00.1: Flow counters bulk query buffer size increased, bulk_query_len(8)
spark1 kernel: mlx5_core 0002:01:00.1: Port module event: module 1, Cable unplugged
spark1 kernel: mlx5_core 0002:01:00.1: mlx5_pcie_event:326:(pid 374): Detected insufficient power on the PCIe slot (27W).
spark1 kernel: mlx5_core 0002:01:00.1: mlx5e: IPSec ESP acceleration enabled
spark1 kernel: mlx5_core 0002:01:00.1: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 enhanced)
spark1 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1
spark1 kernel: mlx5_core 0002:01:00.0 enP2p1s0f0np0: renamed from eth2
spark1 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0
spark1 kernel: mlx5_core 0002:01:00.1 enP2p1s0f1np1: renamed from eth3
spark1 kernel: mlx5_core 0000:01:00.0: E-Switch: Unload vfs: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
spark1 kernel: mlx5_core 0000:01:00.0: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
spark1 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link down
spark1 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link down
spark1 kernel: mlx5_core 0002:01:00.0 enP2p1s0f0np0: Link down
spark1 kernel: mlx5_core 0002:01:00.1 enP2p1s0f1np1: Link down
spark1 kernel: mlx5_core 0000:01:00.0: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
spark1 kernel: mlx5_core 0000:01:00.0: E-Switch: cleanup
spark1 NetworkManager[1569]: <info>  [1776179117.9918] device (enp1s0f0np0): driver 'mlx5_core' does not support carrier detection.
spark1 kernel: mlx5_core 0000:01:00.1: E-Switch: Unload vfs: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
spark1 kernel: mlx5_core 0000:01:00.1: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
spark1 kernel: mlx5_core 0000:01:00.1: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
spark1 kernel: mlx5_core 0000:01:00.1: E-Switch: cleanup
spark1 kernel: mlx5_core 0002:01:00.0: E-Switch: Unload vfs: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
spark1 kernel: mlx5_core 0002:01:00.0: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
spark1 kernel: mlx5_core 0002:01:00.0: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
spark1 kernel: mlx5_core 0002:01:00.0: E-Switch: cleanup
spark1 kernel: mlx5_core 0002:01:00.1: E-Switch: Unload vfs: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
spark1 kernel: mlx5_core 0002:01:00.1: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
spark1 kernel: mlx5_core 0002:01:00.1: E-Switch: Disable: mode(LEGACY), nvfs(0), necvfs(0), active vports(0)
spark1 kernel: mlx5_core 0002:01:00.1: E-Switch: cleanup