PhysX GPU Not Working on DGX Spark (GB10 Blackwell, ARM64) - Isaac Sim 5.1.0

Hi everyone,

I’m trying to get PhysX GPU acceleration working on the new NVIDIA DGX Spark (ARM64 Grace CPU + GB10 Blackwell GPU) with Isaac Sim built from source, but it seems to be stuck in CPU-only mode despite all my attempts to enable it.

Setup

  • Platform: NVIDIA DGX Spark
  • CPU: ARM Grace (20 cores, aarch64)
  • GPU: NVIDIA GB10 Blackwell (sm_121, 93GB VRAM)
  • OS: Ubuntu 24.04.4 LTS
  • Driver: 580.126.09
  • CUDA: 13.0
  • Isaac Sim: 5.1.0-rc.19 (built from source with ./build.sh)
  • PhysX SDK: 5.6.1.f9c67de2-release-107.3-linux-aarch64

The Problem

When running physics simulations (even simple falling boxes), my GPU shows 0% utilization and only 9W power draw. It’s clearly using CPU-only PhysX. I’m running benchmarks for my internship at TU/e Supercomputing Centre and need to compare Isaac Sim performance across different HPC systems.

What I’ve Tried

1. Explicitly enabling GPU physics in code:

import carb

# Flip every GPU-related physics flag I could find in the carb settings.
settings = carb.settings.get_settings()
settings.set("/physics/cudaDevice", 0)
settings.set("/physics/cudaEnabled", True)
settings.set("/physics/gpuDynamics", True)
settings.set("/physics/gpuCollision", True)
settings.set("/physics/broadPhaseType", "GPU")

The settings report back as enabled (True), but GPU still shows 0% usage.
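To rule out eyeballing error with nvidia-smi, I also polled the GPU programmatically while the sim stepped. This is just a small sketch using standard `nvidia-smi` query flags; the function names (`parse_stats_line`, `sample_gpu`) are my own, not part of any NVIDIA API:

```python
import subprocess

def parse_stats_line(line):
    """Parse one CSV line from nvidia-smi, e.g. '0, 9.12' -> (0, 9.12)."""
    util_s, power_s = line.strip().split(",")
    return int(util_s), float(power_s)

def sample_gpu(index=0):
    """Query current GPU utilization (%) and power draw (W) via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", f"--id={index}",
         "--query-gpu=utilization.gpu,power.draw",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_stats_line(out)

if __name__ == "__main__":
    util, power = sample_gpu()
    print(f"GPU util: {util}%, power: {power} W")
```

During physics-only stepping this reports the same idle numbers as the nvidia-smi readings below.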

2. Found PhysX GPU libraries in packman cache:

~/.cache/packman/chk/physxsdk/5.6.1.f9c67de2-release-107.3-linux-aarch64/bin/linux.aarch64/checked/libPhysXGpu_64.so (386 MB)
~/.cache/packman/chk/physxsdk/5.6.1.f9c67de2-release-107.3-linux-aarch64/bin/linux.aarch64/debug/libPhysXGpu_64.so (586 MB)

3. Created symlink to release directory and set LD_LIBRARY_PATH:

mkdir -p ~/.cache/packman/chk/physxsdk/5.6.1.f9c67de2-release-107.3-linux-aarch64/bin/linux.aarch64/release/
ln -s ../checked/libPhysXGpu_64.so ~/.cache/packman/chk/physxsdk/.../release/libPhysXGpu_64.so
export LD_LIBRARY_PATH=~/.cache/packman/chk/physxsdk/.../release:$LD_LIBRARY_PATH

4. Verified with nvidia-smi during simulation:

GPU Util: 0%
Power: 9W (idle)
Clock: 2411 MHz (not boosting)

Benchmark Results

Performance is identical whether I enable GPU physics or not:

Rigid Bodies | FPS | Real-Time Factor | GPU Usage
500 boxes | 85 | 1.42x | 0% ❌
1000 boxes | 37 | 0.62x | 0% ❌
2000 boxes | 15 | 0.25x | 0% ❌

This suggests PhysX is falling back to CPU regardless of settings.
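For context, the real-time factors in the table are exactly what you get from FPS divided by a 60 Hz physics step (60 Hz is my assumption here; it is the usual default physics rate, and the numbers line up):

```python
# Real-time factor relative to an assumed 60 Hz physics timestep.
PHYSICS_HZ = 60.0

def real_time_factor(fps, physics_hz=PHYSICS_HZ):
    """How much faster (>1) or slower (<1) than wall clock the sim runs."""
    return fps / physics_hz

for boxes, fps in [(500, 85), (1000, 37), (2000, 15)]:
    print(f"{boxes} boxes: RTF = {real_time_factor(fps):.2f}x")
```

So anything below 1.0x means the simulation cannot keep up with real time on CPU alone.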

My Questions

  1. Does PhysX GPU support GB10 Blackwell (sm_121) on ARM64? The GB10 has compute capability 12.1, which might be too new for PhysX SDK 5.6.1.

  2. Is GPU physics expected to work on DGX Spark? Or is ARM64 + Blackwell currently unsupported?

  3. Am I missing something obvious? Is there a different way to enable GPU physics on ARM64 vs x86_64?

  4. Any workarounds? Should I wait for a newer PhysX SDK or Isaac Sim release?

Why This Matters

I’m benchmarking the DGX Spark against another AI supercomputer and need to establish fair Isaac Sim performance metrics. If GPU physics doesn’t work on DGX Spark, I need to know whether it’s a configuration issue or a platform limitation so I can adjust my benchmarking approach.

Any help would be greatly appreciated! Has anyone successfully enabled PhysX GPU on DGX Spark or similar ARM64 Blackwell systems?

Thanks in advance!


System Info:

  • DGX Spark (Grace + GB10)
  • Isaac Sim 5.1.0-rc.19 source build
  • PhysX SDK 5.6.1 (ARM64)
  • Driver 580.126.09, CUDA 13.0

Hi @thijn_bakker5, thank you for raising this issue. I’m looking into this now.

Hi @thijn_bakker5, would you be able to try the benchmark on Isaac Sim 6.0.0? I was able to build and run the tutorial example on 6.0.0 and see the GB10 spike in usage.
Steps:

  1. In the Isaac Sim repo, check out branch v6.0.0-dev
  2. Run ./build.sh -x
  3. Go into _build/linux-aarch64/release
  4. Run ./python.sh standalone_examples/tutorials/getting_started_robot.py
  5. As the script runs, check nvidia-smi

UPDATE: GPU Physics Not Working - Confirmed on Both Isaac Sim Versions

I’ve tested extensively and PhysX GPU does not work on DGX Spark, despite libraries being present and settings showing as enabled.

Test Results

Isaac Sim 5.1.0 and 6.0.0 (both tested):

  • PhysX GPU library exists: libPhysXGpu_64.so
  • Settings report: cudaEnabled: True, gpuDynamics: True
  • GPU utilization during physics: 0-3%
  • Power draw during physics: 9-12W (idle)

Rendering comparison (proves GPU works):

  • GUI/ray tracing: 80-95% GPU, 30-45W
  • Physics simulation: 0-3% GPU, 9-12W

Performance Evidence

Identical FPS whether GPU physics is “enabled” or not:

Objects | CPU-only | GPU “enabled” | GPU Used
500 | 89.7 FPS | 85.0 FPS | 0%
1000 | 39.2 FPS | 37.2 FPS | 0%

Questions

  1. Does PhysX GPU 5.6.1 support GB10 Blackwell (sm_121) on ARM64?
  2. Is this a known limitation or configuration issue?
  3. What’s the timeline for GB10/ARM64 support if not currently available?

System: DGX Spark (ARM Grace + GB10)
PhysX SDK: 5.6.1-linux-aarch64
Impact: Physics limited to single-threaded CPU on 20-core system

Any guidance appreciated!

Does anyone know how to deal with this situation?

UPDATE: Tested Isaac Sim 6.0.0-dev - PhysX GPU Still Broken

Hi @michalin,

Thank you for the suggestion to test Isaac Sim 6.0.0-dev. Unfortunately, the PhysX GPU issue persists even in the latest development version.

Isaac Sim 6.0.0-dev Results (DGX Spark - GB10)

Same critical errors:

[Error] [omni.physx.plugin] PhysX error: The application needs to increase 
PxGpuDynamicsMemoryConfig::foundLostAggregatePairsCapacity to 1721, 
otherwise, the simulation will miss interactions
Found GPU0 NVIDIA GB10 which is of cuda capability 12.1.
Minimum and Maximum cuda capability supported by this version of PyTorch is (8.0) - (12.0)

Performance (10 robots, benchmark_robots_o3dyn.py):

  • Mean FPS: 31.4 FPS

  • Physics Frametime: 24.63 ms

  • GPU Utilization: 0-3% (no GPU activity during physics)

  • Real-time Factor: 0.518 (running at half speed)

Extended Testing: SPIKE-1 System (B200)

I also tested on a second Blackwell system to confirm this isn’t GB10-specific:

System: NVIDIA B200 (x86_64, 224 CPUs, 183GB VRAM)
Driver: 580.105.08
Isaac Sim: 5.0.0-rc.45

All three physics backends show identical CPU-only performance:

Physics Backend | Mean FPS | Physics Frametime | GPU Usage
warp (GPU) | 26.0 FPS | 33.46 ms | 0-3% ❌
torch (default) | 26.1 FPS | 33.30 ms | 0-3% ❌
numpy (CPU) | 27.0 FPS | 31.97 ms | 0-3% ❌

This confirms the issue affects all Blackwell GPUs (GB10 and B200), not just DGX Spark.

Key Findings

  1. GPU works for rendering: 510 FPS on B200, 87 FPS on GB10 (scene loading benchmark)

  2. GPU fails for physics: 0-3% utilization across all physics backends

  3. Isaac Sim 6.0.0-dev doesn’t fix it: Same errors and performance as 5.x

  4. Affects multiple Blackwell GPUs: GB10 (ARM64) and B200 (x86_64) both broken

Questions

  1. Is PhysX GPU support for Blackwell (sm_121) planned for a future Isaac Sim release?

  2. Is this a PhysX SDK limitation or an Isaac Sim integration issue?

  3. What’s the recommended workaround for Blackwell users? (Currently using CPU physics for benchmarking)

  4. Should this be filed as a formal bug report on GitHub?

Context

This is affecting my HPC benchmarking work. For now, I’m using CPU physics (--physics numpy) for fair cross-platform comparison, but this significantly limits performance on high-core-count systems.

I’ve prepared a detailed bug report with full reproduction steps if that would be helpful:

nvidia_physx_blackwell_bug_report.txt (7.8 KB)

Any guidance on timeline or workarounds would be greatly appreciated!

Hi @thijn_bakker5, I am still not able to repro the issue. Can you please try the following:

  1. Build Isaac Sim on 6.0.0-dev
  2. Run .../_build/linux-aarch64/release/python.sh joint_continuous_rev.py (the script is attached).

While the attached script is running I see 94% volatile GPU utilization. The script runs headless, so the GPU usage must be coming from PhysX.

joint_continuous_rev.py (3.1 KB)


Hi, confirmed! Your script works perfectly on our end too, 94% GPU utilization as expected!

However, when running our benchmark script (benchmark_robots_o3dyn.py) GPU utilization drops back to 0–3%. After comparing the two scripts, we believe the issue is that SimulationManager.set_backend('torch') + set_physics_sim_device('cuda') only controls the tensor backend (how data is returned to Python), and does not actually enable PhysX GPU dynamics. The benchmark script never explicitly sets the GPU dynamics flags (/physics/gpuDynamics, /physics/cudaDevice), which World() appears to handle implicitly in your script.

Would you have any recommendations on the correct way to enable PhysX GPU dynamics explicitly when using SimulationManager and the omni.kit.app update loop instead of World?

Hi @michalin, following up on our earlier message, we need to correct our previous report.

We initially reported 94% GPU utilization and believed PhysX GPU dynamics were working. After running isolated benchmarks, we’ve confirmed that the 94% GPU utilization was entirely from rendering, not PhysX.

Our evidence (DGX Spark, GB10 sm_121, driver 580.126.09, Isaac Sim 6.0.0):

Mode | render= | GPU Util | GPU Power | Mean FPS | Physics frametime
Physics only | False | 1.9% | 10.8W | 122 FPS | 7.48 ms
Combined | True | 94.9% | 40.8W | 58 FPS | 16.43 ms

With rendering disabled, GPU utilization drops to idle baseline during physics simulation. PhysX GPU dynamics are confirmed non-functional — all physics runs CPU-only regardless of backend (torch, numpy, warp) or settings (gpuDynamics, cudaDevice).

This is consistent across both our platforms:

  • DGX Spark (GB10, ARM64, sm_121, driver 580.126.09)

  • SPIKE-1 (B200, x86_64, sm_120, driver 580.105.08)

Our original question still stands: is there a way to explicitly enable PhysX GPU dynamics on Blackwell GPUs with the 580.x driver series? Or is a driver downgrade to 570.195.03 the only known fix? (That is of course not possible on the DGX Spark, because it won’t boot with that driver.)

Hi @thijn_bakker5,

Would it be possible to get access to the benchmark_robots_o3dyn.py script so we can reproduce this issue on our end?

Hi @thijn_bakker5,

Sorry, my earlier script had a bug and was actually rendering even though it was running headless. Here is another script that utilizes GPU Physics.
physx_gpu_blackwell.py (6.2 KB)

When running this script together with nvidia-smi dmon -s u -d 1 on a DGX Spark I get:

# gpu     sm    mem    enc    dec    jpg    ofa 
# Idx      %      %      %      %      %      % 
    0      0      0      0      0      0      0 
    0      0      0      0      0      0      0 
    0      0      0      0      0      0      0 
    0      0      0      0      0      0      0 
    0      2      0      0      0      0      0 
    0      1      0      0      0      0      0 
    0      1      0      0      0      0      0 
    0     42      0      0      0      0      0 
    0     42      0      0      0      0      0 
    0     42      0      0      0      0      0 
    0     42      0      0      0      0      0 
    0     43      0      0      0      0      0 
    0     42      0      0      0      0      0 
    0     43      0      0      0      0      0 
    0     42      0      0      0      0      0 
    0     43      0      0      0      0      0 
    0     43      0      0      0      0      0 
    0     43      0      0      0      0      0 
    0     10      0      0      0      0      0 
    0      0      0      0      0      0      0 
    0      0      0      0      0      0      0 
    0      0      0      0      0      0      0 
    0      0      0      0      0      0      0 
    0      0      0      0      0      0      0 

The 43% physics utilization happens when the script gets to simulate(). Can you try the script out?
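As a side note, a dmon trace like the one above is easy to summarize programmatically. This is a small parser sketch (the column layout is assumed from the header shown: GPU index first, then sm%; the function names are mine):

```python
def parse_dmon(text):
    """Extract per-sample SM utilization (%) from `nvidia-smi dmon -s u` output."""
    samples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip the two header lines
        fields = line.split()
        samples.append(int(fields[1]))  # second column is sm %
    return samples

def peak_and_mean(samples):
    """Peak sm% over the trace, and mean over the non-idle samples only."""
    active = [s for s in samples if s > 0]
    peak = max(samples) if samples else 0
    mean_active = sum(active) / len(active) if active else 0.0
    return peak, mean_active
```

Feeding the trace above through this gives a peak of 43% during the simulate() window, versus 0% at the idle edges.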

UPDATE: PhysX GPU DOES work on Blackwell with driver 580.x — the issue is World() not enabling GPU dynamics

Thanks to @michalin from NVIDIA for providing the reproduction script that helped us identify the root cause. This is NOT a driver issue — it’s a configuration gap in how World() and Isaac Lab initialize the physics scene.

Root cause: World() does not set EnableGPUDynamicsAttr and BroadphaseTypeAttr on the PhysX scene prim via PhysxSchema. These must be set explicitly through the USD API:

from pxr import UsdPhysics, PhysxSchema
import omni.usd

# Apply the PhysX scene API to every physics scene prim and force GPU dynamics.
stage = omni.usd.get_context().get_stage()
for prim in stage.Traverse():
    if prim.IsA(UsdPhysics.Scene):
        physx_api = PhysxSchema.PhysxSceneAPI.Apply(prim)
        physx_api.CreateEnableGPUDynamicsAttr().Set(True)
        physx_api.CreateBroadphaseTypeAttr().Set("GPU")

Setting the carb settings (/physics/gpuDynamics, /physics/cudaDevice) alone is NOT sufficient. The PhysxSchema attributes on the scene prim are what actually control whether the GPU solver runs.

Confirmed results on DGX Spark (GB10, sm_121, driver 580.126.09, Isaac Sim 5.1.0):

10,000 rigid body cubes, 10,000 simulation steps, headless, render=False:

Metric | CPU Physics | GPU Physics | Improvement
Avg FPS | 344.5 | 542.0 | +57%
Avg step time | 2.90 ms | 1.84 ms | 1.58x faster
GPU Utilization | 0% | ~40% | GPU actively solving

At 2,000 objects, CPU was actually faster (1608 FPS vs 876 FPS) due to GPU transfer overhead dominating a trivially simple workload. The GPU advantage appears at higher object counts where collision pair computation outweighs the transfer cost.
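As a quick sanity check (no new data, just the arithmetic behind the table), the headline numbers follow directly from the measured FPS and step times:

```python
# Measured values from the 10,000-cube benchmark above.
cpu_fps, gpu_fps = 344.5, 542.0
cpu_step_ms, gpu_step_ms = 2.90, 1.84

fps_gain = gpu_fps / cpu_fps - 1.0        # throughput improvement
step_speedup = cpu_step_ms / gpu_step_ms  # per-step speedup

print(f"FPS gain: +{fps_gain:.0%}, step-time speedup: {step_speedup:.2f}x")
# prints: FPS gain: +57%, step-time speedup: 1.58x
```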

What was misleading in our earlier investigation: We initially concluded PhysX GPU was broken because our benchmark using World() showed 0% GPU utilization during physics. We then saw 95% GPU utilization with render=True and incorrectly attributed it all to rendering. The GPU was in fact capable of physics all along — World() just wasn’t requesting it.

Remaining question for NVIDIA: Should World() be setting these PhysxSchema attributes automatically when GPU physics is intended? This seems like a gap that would affect all Isaac Sim / Isaac Lab users on any GPU, not just Blackwell. The carb settings report gpuDynamics=True but the actual USD scene prim doesn’t have GPU dynamics enabled unless you set it manually.

Thanks again to @michalin for the low-level reproduction script that made this clear. Hopefully this helps others hitting the same issue.
