Should I buy the ASUS GX10 instead of the Nvidia DGX Spark?

Given that the ASUS GX10 uses the same GPU as the Nvidia DGX Spark, what are the major differences between ASUS's version and Nvidia's? I know the storage is different (1 TB vs 4 TB).

For regular home AI lab usage, should I wait for the ASUS GX10, which is $1,000 cheaper than the Nvidia DGX Spark?


I’m also thinking the same. It’s hard to justify the extra $1K for 3 TB of storage.


Well, the software may take up a few hundred GB. Then the models will take up more space. I have 2 TB on my laptop and 1 TB is almost filled, and I usually only keep one or two active models on disk, and they’re relatively small. Granted, I also have other software on there, not just for AI. But software is humongous now. I sort of wish I had reserved the Spark proper instead of the Ascent. Just my 2 cents.

I could plug in an external USB drive to store additional data, as long as most regular work fits in the 1 TB internal storage.
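One way to do that is to point the Hugging Face cache at the external drive via the `HF_HOME` environment variable, which `huggingface_hub` honors. A minimal sketch, assuming the drive is already formatted as ext4 and shows up as `/dev/sda1`; the device name and the mount point `/mnt/ext` are made-up examples:

```shell
# Mount the external drive (device name and mount point are examples;
# check `lsblk` to find yours).
sudo mkdir -p /mnt/ext
sudo mount /dev/sda1 /mnt/ext

# huggingface_hub stores its cache under $HF_HOME, so new model downloads
# land on the external drive instead of the internal NVMe.
export HF_HOME=/mnt/ext/huggingface
mkdir -p "$HF_HOME"
hf download Qwen/Qwen3-1.7B   # cached under $HF_HOME/hub
```

Add the `export` line to `~/.bashrc` if you want it to persist across sessions.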

Are both the Spark and the GX10 arm64 CPUs? If they are, that would mean RAPIDS won’t run (with GPU acceleration) on them. I don’t know if that’s a factor in your decision, but it was in mine.

All OEM products use the same SoC as the Founders Edition.

The GB10 Superchip is a system-on-a-chip (SoC) based on the NVIDIA Grace Blackwell architecture and delivers up to 1 petaflop of AI performance at FP4 precision.

GB10 features an NVIDIA Blackwell GPU with latest-generation CUDA® cores and fifth-generation Tensor Cores, connected via NVLink®-C2C chip-to-chip interconnect to a high-performance NVIDIA Grace™ CPU, which includes 20 power-efficient cores built with the Arm architecture. MediaTek, a market leader in Arm-based SoC designs, collaborated on the design of GB10, contributing to its best-in-class power efficiency, performance and connectivity.

And I would expect it’s always the same mainboard; at least the backsides look identical in terms of connectors. Only the housing and cooling are different. And the ASUS Ascent is said to come with a bigger power supply than the Founders Edition.


Aye. 1 TB should be sufficient for most use cases. I don’t need all models at once; just connect an external NVMe via USB-C. With the regular 10 GbE NIC you could also use network storage as an alternative.

I wonder if the NVMe that comes with the system is soldered or has a regular M.2 connector. 🧐

(hf-cli) ubuntu@xlarge:~$ hf cache scan
REPO ID                                                REPO TYPE SIZE ON DISK NB FILES LAST_ACCESSED LAST_MODIFIED REFS LOCAL PATH
------------------------------------------------------ --------- ------------ -------- ------------- ------------- ---- -------------------------------------------------------------------------------
DevQuasar/Qwen.Qwen3-Next-80B-A3B-Instruct-FP8-Dynamic model            80.5G       30 5 weeks ago   5 weeks ago   main /data/hf/models/models--DevQuasar--Qwen.Qwen3-Next-80B-A3B-Instruct-FP8-Dynamic
Qwen/Qwen3-1.7B                                        model             4.1G       12 5 weeks ago   5 weeks ago   main /data/hf/models/models--Qwen--Qwen3-1.7B
Qwen/Qwen3-Next-80B-A3B-Instruct-FP8                   model            82.1G       18 2 hours ago   2 hours ago   main /data/hf/models/models--Qwen--Qwen3-Next-80B-A3B-Instruct-FP8
RedHatAI/Llama-3.3-70B-Instruct-FP8-dynamic            model            72.7G       24 5 weeks ago   5 weeks ago   main /data/hf/models/models--RedHatAI--Llama-3.3-70B-Instruct-FP8-dynamic
cpatonn/GLM-4.5-Air-AWQ-4bit                           model            63.4G       23 2 hours ago   5 weeks ago   main /data/hf/models/models--cpatonn--GLM-4.5-Air-AWQ-4bit
cpatonn/Qwen3-Next-80B-A3B-Instruct-AWQ-4bit           model            47.6G       23 5 weeks ago   5 weeks ago   main /data/hf/models/models--cpatonn--Qwen3-Next-80B-A3B-Instruct-AWQ-4bit
jeffcookio/Mistral-Small-3.2-24B-Instruct-2506-awq-sym model            15.1G       17 3 weeks ago   3 weeks ago   main /data/hf/models/models--jeffcookio--Mistral-Small-3.2-24B-Instruct-2506-awq-sym
openai/gpt-oss-120b                                    model           130.5G       33 3 days ago    5 weeks ago   main /data/hf/models/models--openai--gpt-oss-120b

Done in 0.0s. Scanned 8 repo(s) for a total of 496.0G.
Got 1 warning(s) while scanning. Use -vvv to print details.

This is the space I use on a dev machine (with an H100 94GB NVL) I’m working on in our DC: ~500 GB, mainly for quants. OS + apps come to around 95 GB. Not several hundred. 😉
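For pruning that cache, the same CLI has a cleanup command. A sketch, with the caveat that `hf cache delete` is the assumed spelling for recent `huggingface_hub` releases; older installs expose it as `huggingface-cli delete-cache`:

```shell
# Per-revision detail, including the warning mentioned in the scan summary.
hf cache scan -vvv

# Interactive selector for cached repos/revisions to delete and reclaim space.
# (assumed spelling for recent huggingface_hub; older installs use
#  `huggingface-cli delete-cache`)
hf cache delete
```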


It is located on the bottom under the round padded foot.


Very nice!

UPDATE: checked with the RAPIDS forum. RAPIDS container will work with arm64 - https://hub.docker.com/r/rapidsai/base
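A quick way to verify that on the box itself might look like the following. The image tag is a made-up example (check the Docker Hub page above for current ones), and `--gpus all` assumes the NVIDIA Container Toolkit is installed:

```shell
# Pull the multi-arch RAPIDS base image and check that cuDF can use the GPU.
# The tag is an example only; pick a current one from the rapidsai/base page.
docker pull rapidsai/base:25.04-cuda12.8-py3.12
docker run --rm --gpus all rapidsai/base:25.04-cuda12.8-py3.12 \
  python -c "import cudf; print(cudf.Series([1, 2, 3]).sum())"
```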

I have the same question as well.
I ordered the DGX Spark and it is expected to be delivered today/tomorrow.
I am trying to figure out whether it is cheaper for me to keep using Google Colab Pro at $10 a month, buy additional tokens as I need them, and continue to use the A100 for my personal projects.

For my work, all our data is in the AWS cloud, so I have to use the p4d instance type when training transformers.

Does anyone else think I am off base here and that I should not return my DGX once I have it?

Thanks…

What has your Google Colab Pro use cost so far? It really boils down to either long-term cost savings or privacy, no? If the latter is most important, is it worth the $4K? If cost is more important, how long can you use Colab for $4K? I like the privacy aspect and anticipate good models getting smaller and more efficient, so I think in my case the DGX has ‘legs’.

I am trying to be super diligent when using the A100, so I use the CPU to get the code functional and then switch to the A100 (of course, the AMP and CUDA APIs cannot be tested on CPU), and I disconnect and delete the runtime when idle. I have implemented batch-level checkpointing to Google Drive, so I can load my model and resume from there. So far my cost has been $80.

Also, just FYI, I am still going through the learning curve of training transformers from scratch, so it’s more expensive to be renting while making no real progress at times.

So with the $4K upfront cost of the DGX, and given its limited memory bandwidth, it will likely take me a few years to break even, and by that time we could see newer iterations of the hardware with a better price-to-compute ratio.

I got my Asus GX10 delivered today. The machine does not boot: it shuts down right after the ASUS logo is displayed on the screen. Has anyone else faced this issue?

I have a 1TB Asus Ascent GX10 (acquired it a few weeks ago).

In terms of compute and memory bandwidth (tested using llama.cpp), it appears to be identical to the regular DGX Spark.

# commit 5acd455
./llama-bench -m ./gpt-oss-20b-mxfp4.gguf -fa 1 -d 0,4096,8192,16384,32768 -p 2048 -n 32 -ub 2048 

ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GB10, compute capability 12.1, VMM: yes
| model                          |       size |     params | backend    | ngl | n_ubatch | fa |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -------: | -: | --------------: | -------------------: |
| gpt-oss 20B MXFP4 MoE          |  11.27 GiB |    20.91 B | CUDA       |  99 |     2048 |  1 |          pp2048 |       3639.61 ± 9.49 |
| gpt-oss 20B MXFP4 MoE          |  11.27 GiB |    20.91 B | CUDA       |  99 |     2048 |  1 |            tg32 |         81.04 ± 0.49 |
| gpt-oss 20B MXFP4 MoE          |  11.27 GiB |    20.91 B | CUDA       |  99 |     2048 |  1 |  pp2048 @ d4096 |       3382.30 ± 6.68 |
| gpt-oss 20B MXFP4 MoE          |  11.27 GiB |    20.91 B | CUDA       |  99 |     2048 |  1 |    tg32 @ d4096 |         74.66 ± 0.94 |
| gpt-oss 20B MXFP4 MoE          |  11.27 GiB |    20.91 B | CUDA       |  99 |     2048 |  1 |  pp2048 @ d8192 |      3140.84 ± 15.23 |
| gpt-oss 20B MXFP4 MoE          |  11.27 GiB |    20.91 B | CUDA       |  99 |     2048 |  1 |    tg32 @ d8192 |         69.63 ± 2.31 |
| gpt-oss 20B MXFP4 MoE          |  11.27 GiB |    20.91 B | CUDA       |  99 |     2048 |  1 | pp2048 @ d16384 |       2657.65 ± 6.55 |
| gpt-oss 20B MXFP4 MoE          |  11.27 GiB |    20.91 B | CUDA       |  99 |     2048 |  1 |   tg32 @ d16384 |         65.39 ± 0.07 |
| gpt-oss 20B MXFP4 MoE          |  11.27 GiB |    20.91 B | CUDA       |  99 |     2048 |  1 | pp2048 @ d32768 |       2032.37 ± 9.45 |
| gpt-oss 20B MXFP4 MoE          |  11.27 GiB |    20.91 B | CUDA       |  99 |     2048 |  1 |   tg32 @ d32768 |         57.06 ± 0.08 |

The NVMe is rated for PCIe 4.0, so note that it will likely be slower than the Founders Edition equivalent, on top of being smaller in capacity. According to nvme-cli, the disk appears to be a Phison part, but of an unknown model (ESL01TBTLCZ-27J2-TYN). fio reports around 3.5–4.0 GB/s sequential read throughput with a 1 MB block size.

[global]
ioengine=libaio
direct=1
size=10G
directory=./fio_test
runtime=30
time_based
group_reporting
filename=testfile

[sequential-read]
description=Sequential Read Test (1M blocks)
rw=read
bs=1m
numjobs=1
stonewall

fio disk_profile_1mb.fio
sequential-read: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
fio-3.36
Starting 1 process
sequential-read: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [R(1)][100.0%][r=3959MiB/s][r=3959 IOPS][eta 00m:00s]
sequential-read: (groupid=0, jobs=1): err= 0: pid=3662: Sun Nov  2 11:23:26 2025
  Description  : [Sequential Read Test (1M blocks)]
  read: IOPS=3778, BW=3779MiB/s (3962MB/s)(111GiB/30001msec)
    slat (usec): min=22, max=359, avg=25.81, stdev= 4.77
    clat (usec): min=186, max=1566, avg=237.80, stdev=43.41
     lat (usec): min=235, max=1788, avg=263.61, stdev=44.15
    clat percentiles (usec):
     |  1.00th=[  212],  5.00th=[  215], 10.00th=[  217], 20.00th=[  219],
     | 30.00th=[  219], 40.00th=[  221], 50.00th=[  223], 60.00th=[  229],
     | 70.00th=[  239], 80.00th=[  243], 90.00th=[  297], 95.00th=[  310],
     | 99.00th=[  392], 99.50th=[  545], 99.90th=[  652], 99.95th=[  750],
     | 99.99th=[ 1037]
   bw (  MiB/s): min= 2425, max= 4063, per=100.00%, avg=3787.71, stdev=311.29, samples=59
   iops        : min= 2425, max= 4063, avg=3787.54, stdev=311.33, samples=59
  lat (usec)   : 250=85.74%, 500=13.64%, 750=0.57%, 1000=0.04%
  lat (msec)   : 2=0.01%
  cpu          : usr=0.20%, sys=12.34%, ctx=113377, majf=0, minf=267
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=113366,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=3779MiB/s (3962MB/s), 3779MiB/s-3779MiB/s (3962MB/s-3962MB/s), io=111GiB (119GB), run=30001-30001msec

Disk stats (read/write):
  nvme0n1: ios=902260/101, sectors=230978560/1600, merge=0/95, ticks=154508/28, in_queue=154536, util=87.74%

Software is mostly identical to DGX Spark except for some minor alterations by Asus (e.g. wallpaper).

The power adapter is 240W (PD3.1), so nothing overwhelmingly special about it.


Tried new power sockets; still not working. Not sure if it has anything to do with the wattage of the power outlet. Any help?

Have you made sure it clicks all the way in?
Make sure the power cord is plugged securely into the USB-C port that’s next to the power button.
Also check the 3-prong plug next to the power brick; it needs a gentle push to lock in.

Spoke to the Asus service center. Seems like a faulty unit; they just asked me to return the machine. Not great after waiting for months, but it is what it is.

I just got mine. Runs fine. Dealing with my Linux ignorance is the main issue. No power issues. Sounds like you just got a bad one. It happens.

Yes, looks like I just got a bad unit. Unfortunately they don’t have a replacement option; it’s return only. Looks like I will have to wait for it to be available at retailers now.