Wiping DGX Spark and new install, list of repos to install

Hi, I wiped my DGX Spark and re-installed the recovery image from ISO.

Can you please help list out all the drivers and software I need to install to get back to the original install? Currently the libraries are using only the 20-core CPU and aren't using the GPU. 😅

Can you please list all the software in install sequence? I could not find a thread where this is listed in install order specifically for the DGX Spark.

Please execute the following from the command line:

sudo apt update
sudo apt dist-upgrade
sudo fwupdmgr refresh
sudo fwupdmgr upgrade
sudo reboot

Driver updates are delivered from the DGX Spark update process. You should not need to update the driver.
Latest driver version is: 580.95.05

Can you share the output from

sudo cat /etc/fastos-release 

sudo cat /etc/dgx-release

You can check what updates are available: sudo apt list --upgradable

Thanks a bunch for responding.

I was vibe coding an AI project. One of the AI models suggested I use "--break-system-packages" to install some packages, and I didn't pay attention while it was installing them. It was using CUDA 12.x libraries to code things up, and I think that broke some stuff. I figured a reinstall would fix things faster than identifying each library to undo the damage.
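As an aside: the usual way to avoid ever needing "--break-system-packages" on Ubuntu 24.04 (whose system Python is "externally managed") is to keep project dependencies in a virtual environment. A minimal sketch, where the venv name "ai-project-venv" is just an example and not from this thread:

```shell
# Sketch: keep AI-project packages out of the apt-managed system Python.
# "ai-project-venv" is an example name, not from the original thread.
python3 -m venv ai-project-venv          # create an isolated environment
. ai-project-venv/bin/activate           # activate it for this shell session
# pip install <your CUDA / PyTorch packages>   # no --break-system-packages needed
python -c 'import sys; print(sys.prefix)'     # prints the venv path when active
deactivate
```

Packages installed inside the venv never touch `/usr/lib/python3/dist-packages`, so a bad install can be thrown away by deleting the venv directory instead of reimaging the box.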

Here is the output,

>sudo cat /etc/fastos-release

NAME="DGX SPARK FASTOS"
DATE="2025-10-10T03:26:29+00:00"
VERSION="1.91.51"
BUILD_TYPE="customer"

>sudo cat /etc/dgx-release

DGX_NAME="DGX Spark"
DGX_PRETTY_NAME="NVIDIA DGX Spark"
DGX_SWBUILD_DATE="2025-10-04-06-28-28"
DGX_SWBUILD_VERSION="7.2.3"
DGX_COMMIT_ID="03dc741"
DGX_PLATFORM="DGX Server for KVM"
DGX_SERIAL_NUMBER="aaaaaaaaaaaa"
DGX_OTA_VERSION="7.3.1"
DGX_OTA_DATE="Wed Nov 19 04:36:05 UTC 2025"

>sudo apt list --upgradable

libnautilus-extension4/noble-updates 1:46.4-0ubuntu0.2 arm64 [upgradable from: 1:46.4-0ubuntu0.1]
nautilus-data/noble-updates 1:46.4-0ubuntu0.2 all [upgradable from: 1:46.4-0ubuntu0.1]
nautilus/noble-updates 1:46.4-0ubuntu0.2 arm64 [upgradable from: 1:46.4-0ubuntu0.1]
ubuntu-advantage-tools/noble-updates 37.1ubuntu0~24.04 all [upgradable from: 36ubuntu0~24.04]
ubuntu-pro-client-l10n/noble-updates 37.1ubuntu0~24.04 arm64 [upgradable from: 36ubuntu0~24.04]
ubuntu-pro-client/noble-updates 37.1ubuntu0~24.04 arm64 [upgradable from: 36ubuntu0~24.04]

nvidia-smi output

Wed Nov 26 02:54:42 2025

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.95.05              Driver Version: 580.95.05      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GB10                    On  |   0000000F:01:00.0 Off |                  N/A |
| N/A   42C    P0             12W /  N/A  |     Not Supported      |     0%       Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            2944      C   /usr/local/bin/python3                  274MiB |
|    0   N/A  N/A           21196      G   /usr/lib/xorg/Xorg                      126MiB |
|    0   N/A  N/A           21281      G   /usr/bin/gnome-shell                     18MiB |
+-----------------------------------------------------------------------------------------+

You seem to have the latest packages. Check out this discussion: Effective PyTorch and CUDA

I found the issue! It uses the GPU when I freshly start the server. However, if I let the computer sleep and come back, that's when GPU utilization stops. I guess I need to reload the model if I step away; the CUDA context is probably getting corrupted by the suspend/resume cycle.
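For what it's worth, a quick way to tell whether the framework still sees the GPU after a resume is a small probe like this. A hedged sketch, assuming PyTorch is the stack in use (it degrades gracefully if torch isn't installed):

```python
# Sketch: probe GPU visibility, e.g. after a suspend/resume cycle.
# Assumes PyTorch; if it is not installed we report that instead of crashing.
def gpu_status():
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        # Device 0 is reachable; a stale CUDA context would typically
        # surface as an error here or on the first kernel launch.
        return f"GPU visible: {torch.cuda.get_device_name(0)}"
    return "no GPU visible (driver or CUDA context problem?)"

if __name__ == "__main__":
    print(gpu_status())
```

Running this before and after letting the machine sleep would confirm whether the framework loses the device, or whether only the long-lived server process holds a dead context and needs restarting.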

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.