I ran into trouble: I can't install torchaudio in this image. Is there a workaround or a whl file that could help? I've tried to build it from source on the v2.8.0 branch, but the build fails:
/usr/local/lib/python3.12/dist-packages/torch/include/ATen/core/TensorBody.h:256:1: note: declared here
256 | GenericPackedTensorAccessor<T,N,PtrTraits,index_t> packed_accessor() const & {
| ^ ~~~~~~~~~~~~~
/workspace/audio/src/libtorchaudio/iir_cuda.cu:67:240: warning: ‘at::GenericPackedTensorAccessor<T, N, PtrTraits, index_t> at::Tensor::packed_accessor() const & [with T = float; long unsigned int N = 2; PtrTraits = at::RestrictPtrTraits; index_t = long unsigned int]’ is deprecated: packed_accessor is deprecated, use packed_accessor32 or packed_accessor64 instead [-Wdeprecated-declarations]
67 | AT_DISPATCH_FLOATING_TYPES(
| ^
/usr/local/lib/python3.12/dist-packages/torch/include/ATen/core/TensorBody.h:256:1: note: declared here
256 | GenericPackedTensorAccessor<T,N,PtrTraits,index_t> packed_accessor() const & {
| ^ ~~~~~~~~~~~~~
/workspace/audio/src/libtorchaudio/iir_cuda.cu:67:324: warning: ‘at::GenericPackedTensorAccessor<T, N, PtrTraits, index_t> at::Tensor::packed_accessor() const & [with T = float; long unsigned int N = 3; PtrTraits = at::RestrictPtrTraits; index_t = long unsigned int]’ is deprecated: packed_accessor is deprecated, use packed_accessor32 or packed_accessor64 instead [-Wdeprecated-declarations]
67 | AT_DISPATCH_FLOATING_TYPES(
| ^
/usr/local/lib/python3.12/dist-packages/torch/include/ATen/core/TensorBody.h:256:1: note: declared here
256 | GenericPackedTensorAccessor<T,N,PtrTraits,index_t> packed_accessor() const & {
| ^ ~~~~~~~~~~~~~
ninja: build stopped: subcommand failed.
Here’s a way to compile torchaudio in the container:
wget https://github.com/pytorch/audio/archive/refs/tags/v2.8.0.tar.gz
tar xfz v2.8.0.tar.gz
apt update
apt install libavformat-dev libavcodec-dev libavutil-dev libavdevice-dev libavfilter-dev
cd audio-2.8.0 # or whatever directory it un-tars as
USE_CUDA=1 python3 -m pip install -v . --no-use-pep517
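Before kicking off the long build, a quick pre-flight check can save a wasted run. A minimal sketch (the tool list is my assumption about what the torchaudio build needs; adjust it to your image):

```shell
# Report which build tools are visible on PATH before starting the build.
# (Tool list is a guess at what the torchaudio CUDA build needs; adjust as needed.)
for tool in gcc g++ cmake ninja nvcc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "MISSING: $tool"
  fi
done | tee preflight.txt
```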
# When the install finishes, from another terminal:
docker ps
# Copy the container ID
docker commit -m "Installed torchaudio" -a "Your Name" replace_containerid pytorch28-audio:1.0 # or whatever name you'd like, but include a :tag
# Then exit the running container and run your new image:
docker run -it --net=host --runtime nvidia --privileged --ipc=host --ulimit memlock=-1 \
--ulimit stack=67108864 -v $(pwd):/workspace pytorch28-audio:1.0 bash
Download the cu130 torch 2.9 and torchaudio wheels from the same website, put them where your -v points, run pip install -U torch*.whl torchaudio*.whl, and see if the container is functional. If so, docker commit it.
Here are some github.com/pytorch/pytorch issues that seem to be about the PyTorch version 2.8.0a0+34c6371d24 in that specific image. So the other thing you could do, if the above isn't practicable, would be to compile torch and torchaudio from git clone -b v2.8.0.
Yes. Actually, it is feasible if I just install the torch/torchvision/torchaudio packages from the sbsa/cu130 index. But I think the torch package in this image may have some optimizations for Thor, so I'd like to know whether anyone has built torchaudio successfully in this image and can share their experience. Btw, thank you so much for your advice.
Create cuda13.0_packages.txt and apply it with pip install -r cuda13.0_packages.txt:
# This may be overboard for a container, but it's the requirements.txt file for my Thor. cupy-cuda13x holds cub, which is required.
--extra-index-url https://pypi.nvidia.com
cuda-bindings
cuda-core
cuda-pathfinder
cuda-python
cupy-cuda13x
nvidia-cublas
nvidia-cuda-crt
nvidia-cuda-cupti
nvidia-cuda-nvcc
nvidia-cuda-nvrtc
nvidia-cuda-runtime
nvidia-cudnn-cu13
nvidia-cufile
nvidia-cusparselt-cu13
nvidia-nccl-cu13
nvidia-nvimgcodec-tegra-cu13
nvidia-nvjitlink
nvidia-nvjpeg2k-tegra-cu13
nvidia-nvtx
nvidia-nvvm
nvmath-python
nvtx
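If it helps, the file can be created and checked in one paste with a heredoc. A sketch (list abbreviated here; paste the full list from above between the EOF markers):

```shell
# Write an abbreviated cuda13.0_packages.txt (substitute the full list from above).
cat > cuda13.0_packages.txt <<'EOF'
--extra-index-url https://pypi.nvidia.com
cupy-cuda13x
nvidia-cuda-runtime
nvidia-cudnn-cu13
EOF
# Then: pip install -r cuda13.0_packages.txt
echo "wrote $(grep -c . cuda13.0_packages.txt) lines"
```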
pip install -U ninja cmake setuptools
wget https://github.com/pytorch/audio/archive/refs/tags/v2.8.0.tar.gz
tar xfz v2.8.0.tar.gz
cd audio-2.8.0
nano src/libtorchaudio/forced_align/gpu/compute.cu
// add after last #include
#include <cuda/functional> // for cuda::maximum / cuda::minimum
#include <cuda/std/functional> // for cuda::std::plus / minus / etc.
# Change this line:
scalar_t maxResult = BlockReduce(tempStorage).Reduce(threadMax, cub::Max());
# To:
scalar_t maxResult = BlockReduce(tempStorage).Reduce(threadMax, cuda::maximum<scalar_t>{});
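If you'd rather script this than edit in nano, the same two changes can be made with GNU sed. A sketch, demonstrated on a throwaway stand-in for compute.cu (the exact lines in the real file may differ, so check the result before building):

```shell
# Throwaway stand-in for src/libtorchaudio/forced_align/gpu/compute.cu
cat > demo_compute.cu <<'EOF'
#include <cub/cub.cuh>
scalar_t maxResult = BlockReduce(tempStorage).Reduce(threadMax, cub::Max());
EOF

# 1) Swap the removed cub::Max() functor for cuda::maximum<scalar_t>{}.
sed -i 's/cub::Max()/cuda::maximum<scalar_t>{}/' demo_compute.cu

# 2) Append the two headers right after the cub include (GNU sed 'a' command).
sed -i -e '/cub\.cuh/a #include <cuda/functional>' \
       -e '/cub\.cuh/a #include <cuda/std/functional>' demo_compute.cu

cat demo_compute.cu
```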
Create fix_cccl_fplimits.sh
#!/usr/bin/env bash
# fix_cccl_fplimits.sh
# Usage: bash fix_cccl_fplimits.sh /workspace/.git/audio-2.8.0/src/libtorchaudio/cuctc/src/ctc_prefix_decoder_kernel_v2.cu
set -euo pipefail
FILE="${1:?pass path to .cu file to patch}"
[[ -f "$FILE" ]] || { echo "ERROR: $FILE not found"; exit 1; }
cp -a "$FILE" "$FILE.bak"
# Replace cub::FpLimits<T>::Lowest() -> cuda::std::numeric_limits<T>::lowest()
# (and a few siblings while we're here)
sed -E -i '
s/cub::[[:space:]]*FpLimits[[:space:]]*<([^>]+)>::[[:space:]]*Lowest[[:space:]]*\(\)/cuda::std::numeric_limits<\1>::lowest()/g;
s/cub::[[:space:]]*FpLimits[[:space:]]*<([^>]+)>::[[:space:]]*Max[[:space:]]*\(\)/cuda::std::numeric_limits<\1>::max()/g;
s/cub::[[:space:]]*FpLimits[[:space:]]*<([^>]+)>::[[:space:]]*Min[[:space:]]*\(\)/cuda::std::numeric_limits<\1>::min()/g;
s/cub::[[:space:]]*FpLimits[[:space:]]*<([^>]+)>::[[:space:]]*Infinity[[:space:]]*\(\)/cuda::std::numeric_limits<\1>::infinity()/g;
' "$FILE"
# Ensure we have the limits header
if ! grep -q '<cuda/std/limits>' "$FILE"; then
# Insert right before the first #include
sed -i "0,/#include/s//#include <placeholder>\n&/" "$FILE"
sed -i "s|#include <placeholder>|#include <cuda/std/limits>|" "$FILE"
fi
echo "Patched $FILE (backup at $FILE.bak)"
# And apply it
bash fix_cccl_fplimits.sh src/libtorchaudio/cuctc/src/ctc_prefix_decoder_kernel_v2.cu
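Before patching the real file, the sed rules can be sanity-checked on a throwaway file (same substitutions as in the script above, run inline here):

```shell
# Throwaway file exercising two of the cub::FpLimits patterns the script rewrites.
cat > demo_limits.cu <<'EOF'
float lo = cub::FpLimits<float>::Lowest();
float hi = cub::FpLimits<float>::Max();
EOF

sed -E -i \
  -e 's/cub::[[:space:]]*FpLimits[[:space:]]*<([^>]+)>::[[:space:]]*Lowest[[:space:]]*\(\)/cuda::std::numeric_limits<\1>::lowest()/g' \
  -e 's/cub::[[:space:]]*FpLimits[[:space:]]*<([^>]+)>::[[:space:]]*Max[[:space:]]*\(\)/cuda::std::numeric_limits<\1>::max()/g' \
  demo_limits.cu

cat demo_limits.cu
```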
The container has PyTorch 2.9.0 inside, although it is not compatible with the torchaudio shared on jetson-ai-lab.io.
But please try to build it from source to see if it works.
pytorch/audio release v2.9 has been updated for CUDA 13 and obviates the need for the above patches. Edit: I presumed ffmpeg was installed on pytorch:25.09-py3 since it is on pytorch:25.08-py3. I've just added items to install ffmpeg and a couple of related libraries.
If you have ngc installed, pull the image:
ngc registry image pull nvcr.io/nvidia/pytorch:25.09-py3
Save the built wheel if desired: dist/torchaudio-2.9.0-cp312-cp312-linux_aarch64.whl
# Once torchaudio is installed in the container, from another terminal:
docker ps
# Copy the container ID
docker commit -m "Installed torchaudio" -a "Your Name" replace_containerid pytorch29-audio:2.9 # or whatever name you'd like, but include a :tag
# Then exit the running container and run your new image:
docker run -it --net=host --runtime nvidia --privileged --ipc=host --ulimit memlock=-1 \
--ulimit stack=67108864 -v $(pwd):/workspace pytorch29-audio:2.9 bash