Incompatibility of torchaudio in NGC PyTorch Container 25.12 on DGX Spark (Blackwell GB10)

Hello,

I recently acquired the new NVIDIA DGX Spark (Blackwell GB10 / aarch64 architecture). I am currently using the official NVIDIA NGC container: nvcr.io/nvidia/pytorch:25.12-py3.

While the pre-installed PyTorch 2.10.0a0 works perfectly with the GB10 GPU, I found that torchaudio is missing from the container. This is a critical issue for ASR/TTS workflows.

I have attempted the following solutions without success:

  1. PIP Installation: Running pip install torchaudio fetches the +cpu version from PyPI, which uninstalls the optimized NVIDIA PyTorch build and breaks CUDA support.

  2. Source Compilation: Attempting to build torchaudio from source (main branch) against the container’s PyTorch fails due to missing or relocated headers (e.g., torch/csrc/stable/device.h and torch/headeronly/core/TensorAccessor.h). It seems the header layout in this specific 2.10.0a0 alpha build is non-standard.

  3. NVIDIA PyPI: I checked pypi.nvidia.com, but could not find a wheel for torchaudio that matches this specific PyTorch build on aarch64.
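For anyone reproducing this, here is a quick sanity check (a minimal sketch; it only assumes a working Python in the container) to confirm whether the NVIDIA torch build survived a pip attempt:

```shell
# Sanity check after any pip operation: if pip silently replaced the
# NVIDIA build, the version string and CUDA availability will change.
python - <<'PY'
try:
    import torch
    print("torch", torch.__version__, "| cuda available:", torch.cuda.is_available())
except ImportError:
    print("torch import failed - the container's build may have been removed")
PY
```

In the 25.12 container this should report 2.10.0a0 with CUDA available; a `+cpu` suffix in the version string means the PyPI wheel has replaced the container build.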

As a DGX user, I expect a validated and optimized software stack.

My Questions:

  1. Why is torchaudio not included in the PyTorch 25.12 NGC container for aarch64?

  2. Can NVIDIA provide an official .whl for torchaudio that is ABI-compatible with the pre-installed PyTorch 2.10.0a0 on Blackwell?

  3. If not, what is the validated procedure to build a GPU-accelerated torchaudio for this system?

This is blocking our production ASR deployment on this expensive hardware. Any immediate assistance or an official patch would be greatly appreciated.

Hi,


This is a known issue in torchcodec: it seems no ARM+CUDA binaries are currently built for torchcodec.

A workaround is to use conda and the conda-forge channel.
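A minimal sketch of that workaround, assuming conda (e.g. via Miniforge) is installed. Whether conda-forge actually ships a CUDA-enabled torchaudio for linux-aarch64 should be checked first, and the Python version pin below is illustrative:

```shell
# Hypothetical conda-forge workaround (untested on GB10):
# 1. Check which builds are actually available for this platform.
conda search -c conda-forge torchaudio --subdir linux-aarch64

# 2. If a suitable build exists, install it into a separate environment
#    rather than touching the container's site-packages.
conda create -n audio -c conda-forge python=3.12 pytorch torchaudio
conda activate audio
```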

I haven’t tested torchaudio myself, so please report back if you try it, but I have released some vLLM + PyTorch containers that include updated builds of torch, torchvision, and torchaudio.

This is where I announced the vllm images:

Forum Link: New pre-built vLLM Docker Images for NVIDIA DGX Spark

The vllm images are built on top of updated pytorch images.

You can try to use the image:

scitrera/dgx-spark-pytorch-dev:2.10.0-rc6-cu131

And see if that works for you.
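A sketch of how to smoke-test that image, assuming Docker with the NVIDIA container runtime is set up on the Spark (the `--gpus all` flag and the inline import check are illustrative, not part of the image's documentation):

```shell
# Pull the image and verify that torch and torchaudio import with CUDA visible.
docker run --rm --gpus all \
  scitrera/dgx-spark-pytorch-dev:2.10.0-rc6-cu131 \
  python -c "import torch, torchaudio; print(torch.__version__, torchaudio.__version__, torch.cuda.is_available())"
```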

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.