How to install PyTorch on Thor

I used SDK Manager to flash my Thor, but I found that torch is not installed. How do I install torch?

Tue Sep 9 15:36:03 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.00                 Driver Version: 580.00         CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA Thor                    On  |   00000000:01:00.0 Off |                  N/A |
| N/A  N/A    N/A            N/A  / N/A   |          Not Supported |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            2343      G   /usr/lib/xorg/Xorg                        0MiB |
|    0   N/A  N/A            2424      G   /usr/bin/gnome-shell                      0MiB |
+-----------------------------------------------------------------------------------------+

ERROR: torch-2.8.0-cp313-cp313-manylinux_2_28_aarch64.whl is not a supported wheel on this platform.

oh, no~
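For context, "not a supported wheel on this platform" means the tags in the wheel's filename (cp313 for CPython 3.13, manylinux_2_28_aarch64 for the platform) don't match what your interpreter accepts. A quick stdlib check to compare your own tags against the wheel's:

```python
import sys
import sysconfig

# The failing wheel is tagged cp313 / manylinux_2_28_aarch64; pip rejects it
# unless both tags match this interpreter. Print ours for comparison:
print("python tag : cp%d%d" % sys.version_info[:2])
print("platform   :", sysconfig.get_platform())  # e.g. linux-aarch64
```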

Hi

Yes, that wheel is for r36.x.

Please refer to the download centers below to get a compatible wheel.

Thanks


I downloaded torch-2.9.0 from pypi.jetson-ai-lab.io, and when I tried to use it I got an import error: ImportError: libnvpl_lapack_lp64_gomp.so.0: cannot open shared object file: No such file or directory. I flashed my Thor and installed CUDA etc. with SDK Manager. How can I solve it?

I found the solution in Jetson/L4T/Jetson AI Stack - eLinux.org: just follow the native PyTorch installation instructions. You need to install NVPL first.
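You can confirm the fix took (or reproduce the failure) with a quick loader check; the soname below is the exact one from the ImportError:

```python
import ctypes

# Reproduce the dynamic loader lookup that torch's import performs.
# If CDLL raises OSError, NVPL still isn't visible to the loader.
try:
    ctypes.CDLL("libnvpl_lapack_lp64_gomp.so.0")
    print("NVPL LAPACK: found")
except OSError:
    print("NVPL LAPACK: missing - install NVPL first")
```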

Another way is using Docker:

docker run --rm -it -v "$PWD":/workspace -w /workspace --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/pytorch:25.08-py3

=============
== PyTorch ==
=============

NVIDIA Release 25.08 (build 197421315)
PyTorch Version 2.8.0a0+34c6371

Container image Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

  • this has cuda 13.0 too

When I use the solution in Jetson/L4T/Jetson AI Stack - eLinux.org, import torch is OK, but import torchvision causes an error:

>>> import torchvision
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/tankailin/torch2.9_new/lib/python3.12/site-packages/torchvision/__init__.py", line 10, in <module>
    from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils  # usort:skip
  File "/home/tankailin/torch2.9_new/lib/python3.12/site-packages/torchvision/_meta_registrations.py", line 163, in <module>
    @torch.library.register_fake("torchvision::nms")
  File "/home/tankailin/torch2.9_new/lib/python3.12/site-packages/torch/library.py", line 1063, in register
    use_lib._register_fake(
  File "/home/tankailin/torch2.9_new/lib/python3.12/site-packages/torch/library.py", line 211, in _register_fake
    handle = entry.fake_impl.register(
  File "/home/tankailin/torch2.9_new/lib/python3.12/site-packages/torch/_library/fake_impl.py", line 50, in register
    if torch._C._dispatch_has_kernel_for_dispatch_key(self.qualname, "Meta"):
RuntimeError: operator torchvision::nms does not exist

I believe that to get torchvision::nms you need to compile/install from source. This should do it:

sudo apt update
sudo apt install python3-pip 

python3 -m pip install -U pip wheel 'setuptools>=69' cython cmake
sudo apt install ffmpeg 

# edit: I just built this and found I needed the next 4 lines to compile it.
pip install -U pybind11

PYBIND11_INC="$(python3 -c 'import pybind11, sys; print(pybind11.get_include())')"
export CPATH="$PYBIND11_INC${CPATH:+:$CPATH}"
export CXXFLAGS="-I$PYBIND11_INC ${CXXFLAGS:-}"

export TORCH_CUDA_ARCH_LIST="11.0"  # Thor's compute capability
export FORCE_CUDA=1  # this env var is what causes compilation of nms
export MAX_JOBS=10

git clone -b release/0.24 https://github.com/pytorch/vision.git torchvision
cd torchvision

python -m pip install . --no-build-isolation -v

After installation a quick verification:

python3 - <<'EOF'
import torch, torchvision
print("Torch:", torch.__version__, "Torchvision:", torchvision.__version__)
try:
    from torchvision.ops import nms
    boxes  = torch.rand(1000,4,device='cuda') * 512
    boxes[:,2:] += boxes[:,:2]  # make (x2,y2) ≥ (x1,y1)
    scores = torch.rand(1000,device='cuda')
    keep   = nms(boxes, scores, 0.5)
    print("CUDA NMS succeeded – kept", keep.numel(), "boxes")
except Exception as e:
    print("NMS failed:", e)
EOF

edit: ran above and got this result:

Torch: 2.9.0a0+gitce928e1 Torchvision: 0.24.0+e437e35
CUDA NMS succeeded – kept 426 boxes

The link for the PyTorch whl file has been removed. Any idea where else one can download/build it for Thor?

You can also download the whl file from the sbsa/cu130 index.

Yet that index does not provide a CUDA 13 build; the version available there is the CPU variant.
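As an aside, you can usually tell which variant a torch wheel is from the local version tag in its filename (e.g. `+cu130` vs `+cpu`). A tiny illustrative helper (the filenames below are examples, not actual index contents):

```python
def wheel_variant(filename: str) -> str:
    """Return the local-version tag ('cu130', 'cpu', ...) from a wheel filename."""
    version = filename.split("-")[1]            # e.g. "2.9.0+cu130"
    return version.split("+")[1] if "+" in version else "unknown"

# Example filenames for illustration only:
print(wheel_variant("torch-2.9.0+cu130-cp312-cp312-linux_aarch64.whl"))  # cu130
print(wheel_variant("torch-2.9.0+cpu-cp312-cp312-linux_aarch64.whl"))    # cpu
```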


I used the torch whl file from this site and I got this result.


Hey guys and thanks for the info so far.

On my freshly set up Thor, I tried to do what you suggested.
I created a new venv and pip installed this .whl (after installing NVPL).

But when trying to import torch I get:
OSError: libcudart.so.13: cannot open shared object file: No such file or directory

Any idea what could be the problem?

I wonder if libcudart is actually missing (which I can't imagine) or if Python just can't see it smh..
I did make sure to use the option --system-site-packages when creating the venv.
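One quick way to tell is to mirror the loader's lookup for the exact soname from the OSError (a diagnostic sketch):

```python
import ctypes

# Ask the dynamic loader for libcudart.so.13 directly. If this also fails,
# the CUDA 13 runtime is genuinely absent or not on LD_LIBRARY_PATH.
try:
    ctypes.CDLL("libcudart.so.13")
    print("libcudart.so.13: resolvable")
except OSError as e:
    print("libcudart.so.13: not resolvable ->", e)
```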

Best
Ben

Are you using Anaconda/Miniconda? I removed libstdc++ from my Anaconda environment. The commands I used are below:

cd ~/anaconda3/envs/vllm/lib
mv libstdc++.so.6 libstdc++.so.6.bak
vim ~/.bashrc
export CUDA_HOME=/usr/local/cuda-13.0
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
export TRITON_PTXAS_PATH=/usr/local/cuda-13.0/bin/ptxas
source ~/.bashrc

hope this will help you

hmm interestingly, there is no "cuda-<..>" directory in /usr/local on my machine..

Seems like I missed some parts of the device setup. I assumed it’d ship with CUDA etc preconfigured.. I will report back

Update: well I simply forgot to install the Jetpack SDK. Not my proudest moment.

Anyway, thanks @975593335 for the quick response. Everything works now. Btw I use venv, I didn’t have to make the adjustments to libstdc++ that you mentioned, just installing the .whl was enough.

Cheers
Ben

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.