Difficulties Installing PyTorch on Jetson Orin Nano

Hi there.

I’ve been following some documentation online to install PyTorch on my Jetson Orin Nano (henceforth Jetson).

Setup
Upon getting the Jetson, I followed the setup guide here. When I entered the BIOS, I noted that the version in the top left was 36.4.3. I then imaged and installed JetPack 6.2 just as the guide described, and everything seemed to be in order since it booted properly.

PyTorch
After doing some coding work on the Jetson, I needed to actually figure out how to work with an AI model.
I’ve been following the steps listed here and have run into some issues. Step 2.3 has some broken syntax (the wget is split onto a different line, etc.), and when I execute bash ./install_cusparselt.sh I get an error. Restarting the process, thinking I had done something wrong, I removed the downloaded files (via rm -rf install_cusparselt.sh and rm -rf tmp_cusparselt/) and went line by line:

  1. cd ~
  2. wget ... works just fine when fixed
  3. export CUDA_VERSION=... I honestly don’t even know if I have CUDA installed, if that came with the image, or what.
  4. bash ./install... returns:
+ mkdir tmp_cusparselt
+ cd tmp_cusparselt
+ [[ 12.1 =~ ^12\.[1-4]$ ]]
+ arch_path=sbsa
++ uname -m
+ export TARGETARCH=aarch64
+ TARGETARCH=aarch64
+ '[' aarch64 = amd64 ']'
+ '[' aarch64 = x86_64 ']'
+ CUSPARSELT_NAME=libcusparse_lt-linux-sbsa-0.5.2.1-archive
+ curl --retry 3 -OLs https://developer.download.nvidia.com/compute/cusparselt/redist/libcusparse_lt/linux-sbsa/libcusparse_lt-linux-sbsa-0.5.2.1-archive.tar.xz
+ tar xf libcusparse_lt-linux-sbsa-0.5.2.1-archive.tar.xz
+ cp -a libcusparse_lt-linux-sbsa-0.5.2.1-archive/include/cusparseLt.h /usr/local/cuda/include/
cp: cannot create regular file '/usr/local/cuda/include/cusparseLt.h': Permission denied
  5. Thinking this was strange, I removed the files again, reran steps 1-3, and then ran sudo bash ./install..., but this just returned the following (see the note after the output):
+ mkdir tmp_cusparselt
+ cd tmp_cusparselt
+ [[ '' =~ ^12\.[1-4]$ ]]
+ [[ '' == \1\1\.\8 ]]
+ tar xf .tar.xz
tar: .tar.xz: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
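
For reference: the empty check ([[ '' =~ ... ]]) in the sudo run suggests the exported CUDA_VERSION never reached the script, since sudo resets the caller’s environment by default. A minimal sketch of passing it through explicitly, assuming the same CUDA 12.1 value the non-sudo run used:

export CUDA_VERSION=12.1
# sudo treats leading VAR=value arguments as environment assignments for the command it runs
sudo CUDA_VERSION=$CUDA_VERSION bash ./install_cusparselt.sh
# alternatively, ask sudo to preserve the caller's environment
sudo -E bash ./install_cusparselt.sh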

I’ve even tried abandoning this method and instead installing a specific version of PyTorch (same guide, just further down, under “Next, install PyTorch with the following steps”), where I used

export JP_VERSION=621
export PYT_VERSION=2.8.0a0+5228986c39
export TORCH_INSTALL=https://developer.download.nvidia.com/compute/redist/jp/v$JP_VERSION/pytorch/$PYT_VERSION

where the versions were confirmed for JP_VERSION here and PYT_VERSION here. I also tried directly installing the wheel from a forum post from 2019 (which I can’t link since I’m a new user, but just search “PyTorch for Jetson” and it’s the first result) that covered JetPack 6.
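
For completeness, a minimal sketch of the follow-up steps from that section of the guide, assuming the exports above are set correctly (the prerequisite package list is abbreviated here, so treat it as an approximation of what the guide asks for):

# install pip and the OpenBLAS dev package the guide calls for (other prerequisites omitted)
sudo apt-get install -y python3-pip libopenblas-dev
# install the wheel pointed to by TORCH_INSTALL
python3 -m pip install --no-cache $TORCH_INSTALL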

Honestly I’m just stuck as to what I’m doing wrong with this whole thing, and would greatly appreciate some guidance. I’ve asked a couple of AIs but they’re not much help.
Thanks!

*** Please note that this reply is generated by LLM automatically ***

I’d be happy to help you troubleshoot the issues you’re facing with installing PyTorch on your Jetson Orin Nano.

Firstly, it’s great that you’ve followed the setup guide for the Jetson Orin Nano and have successfully installed and imaged JetPack 6.2. This suggests that your Jetson device is properly set up and configured.

Regarding PyTorch installation, it seems like you’ve tried multiple approaches, including:

  1. Following the steps listed in the documentation, which resulted in errors.
  2. Removing the downloaded files and re-running the installation line by line, which still failed (a permission error without sudo, then missing variables under sudo).
  3. Installing a specific version of PyTorch using the pip command, which didn’t work.

To clarify, CUDA is a requirement for PyTorch, and it’s likely that CUDA is already installed on your Jetson device as part of the JetPack 6.2 image. You can verify this by running the command nvcc --version, which should display the CUDA version installed on your device.
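
For example, a quick way to check (assuming the JetPack CUDA install lands under /usr/local/cuda, which is not necessarily on PATH by default):

nvcc --version                      # works only if /usr/local/cuda/bin is on your PATH
/usr/local/cuda/bin/nvcc --version  # full path, in case PATH is not set up yet
dpkg -l | grep -i cuda-toolkit      # shows which CUDA toolkit packages are installed via apt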

Assuming CUDA is installed, the next step is to ensure that you’re installing the correct version of PyTorch for your JetPack version. You’ve mentioned that you’ve tried installing PyTorch using the pip command with specific versions, but it’s possible that the versions you’re using are not compatible with your JetPack version.

Here are a few suggestions to help you resolve the issue:

  1. Verify your JetPack version: Run the command dpkg -l | grep jetpack to verify that you’re running JetPack 6.2 (see the sketch after this list).
  2. Check PyTorch version compatibility: Ensure that you’re installing a PyTorch version that is compatible with your JetPack version. You can check the PyTorch documentation for the supported versions.
  3. Use the SDK Manager method: Instead of installing PyTorch using pip, try using the SDK Manager method to install PyTorch. This method is recommended by NVIDIA and ensures that you’re installing the correct version of PyTorch for your JetPack version.
  4. Check for dependencies: Ensure that you have all the required dependencies installed, including CUDA, cuDNN, and other libraries required by PyTorch.
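
A minimal sketch of the version checks suggested in point 1 (plus /etc/nv_tegra_release, which is where L4T records the BSP release on Jetson; the apt-cache line assumes the NVIDIA apt repository is configured):

dpkg -l | grep nvidia-jetpack               # JetPack meta-package version, if installed
cat /etc/nv_tegra_release                   # L4T (BSP) release, e.g. R36.x
apt-cache show nvidia-jetpack | head -n 5   # version available from the configured NVIDIA apt repo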

If you’re still facing issues, please provide more details about the errors you’re encountering, and I’ll do my best to help you troubleshoot the problem.

In comparison to other Jetson devices, the installation process for PyTorch on the Jetson Orin Nano is similar to other devices, such as the Jetson AGX Orin and Jetson Orin NX. However, the specific versions of PyTorch and CUDA required may vary depending on the device and JetPack version.

I hope this helps, and please let me know if you have any further questions or concerns!

*** Please give the thumbs up if you get answers you like. Or provide feedback to help us improve the answer. ***

Hello auto-reply guy,

Running nvcc --version actually isn’t a recognized command (it yields bash: nvcc: command not found). Wanting to fix this, I ran sudo apt install nvidia-cuda-toolkit, which returned the following:

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 nvidia-cuda-toolkit : Depends: nvidia-cuda-dev (= 11.5.1-1ubuntu1) but 6.2.1+b38 is to be installed
                       Recommends: nvidia-cuda-toolkit-doc (= 11.5.1-1ubuntu1) but it is not going to be installed
                       Recommends: nvidia-cuda-gdb (= 11.5.114~11.5.1-1ubuntu1) but it is not going to be installed
                       Recommends: nsight-compute (= 2021.3.1.4~11.5.1-1ubuntu1)
                       Recommends: nsight-systems (= 2021.3.3.2~11.5.1-1ubuntu1)
E: Unable to correct problems, you have held broken packages.

Seeing the dependency issue, I ran sudo apt install nvidia-cuda-dev, but rerunning the toolkit command still resulted in the same error.
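
(For reference, a generic apt check, not something from the guide, that makes the repository mismatch visible by listing where each candidate package comes from:)

apt-cache policy nvidia-cuda-toolkit nvidia-cuda-dev
# the Ubuntu archive offers the 11.5.1 toolkit, while the NVIDIA L4T repo provides the 6.2.x nvidia-cuda-dev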

Additionally, running dpkg -l | grep jetpack yields nothing. This may be the issue.

The package name is nvidia-jetpack; if it’s not installed, just run sudo apt install nvidia-jetpack. After installation, add /usr/local/cuda/bin to your PATH so that nvcc and other commands are available without using full file paths.

nvidia-cuda-dev_6.2.1+b38_arm64.deb is installed as a dependency of nvidia-jetpack. Trying to install nvidia-cuda-toolkit pulls nvidia-cuda-toolkit_11.5.1-1ubuntu1, which depends on nvidia-cuda-dev_11.5.1-1ubuntu1 and conflicts with the already installed nvidia-cuda-dev.

Notice that apt install nvidia-cuda-toolkit pulls from the Ubuntu repository, hence the conflict. cuda-toolkit-12-6 is already installed as part of nvidia-jetpack, so Ubuntu’s nvidia-cuda-toolkit is not needed.
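
Putting the suggestion above into a short sketch (package and path names as given in this thread; appending to ~/.bashrc is just one way to persist the PATH change):

sudo apt update
sudo apt install nvidia-jetpack                            # pulls the JetPack components, including cuda-toolkit-12-6
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc  # so nvcc and friends are found without full paths
source ~/.bashrc
nvcc --version                                             # should now report the CUDA 12.6 toolchain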

The instructions at Installing PyTorch for Jetson Platform - NVIDIA Docs need updating. Trying to install the version in the example ended with a “not a supported wheel” error.

There’s a newer version at https://developer.download.nvidia.cn/compute/redist/jp/v61/pytorch/ that installs successfully on the Jetson Orin Nano:

elsaco@jetson:~$ export TORCH_INSTALL=https://developer.download.nvidia.cn/compute/redist/jp/v512/pytorch/torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
elsaco@jetson:~$ echo $TORCH_INSTALL
https://developer.download.nvidia.cn/compute/redist/jp/v512/pytorch/torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
elsaco@jetson:~$ pip install --no-cache $TORCH_INSTALL
Defaulting to user installation because normal site-packages is not writeable
ERROR: torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl is not a supported wheel on this platform.
elsaco@jetson:~$ export TORCH_INSTALL=https://developer.download.nvidia.cn/compute/redist/jp/v61/pytorch/torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl
elsaco@jetson:~$ echo $TORCH_INSTALL
https://developer.download.nvidia.cn/compute/redist/jp/v61/pytorch/torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl
elsaco@jetson:~$ pip install --no-cache $TORCH_INSTALL
Defaulting to user installation because normal site-packages is not writeable
Collecting torch==2.5.0a0+872d972e41.nv24.08.17622132
  Downloading https://developer.download.nvidia.cn/compute/redist/jp/v61/pytorch/torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl (807.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 807.0/807.0 MB 16.0 MB/s eta 0:00:00
Requirement already satisfied: networkx in ./.local/lib/python3.10/site-packages (from torch==2.5.0a0+872d972e41.nv24.08.17622132) (3.4.2)
Requirement already satisfied: typing-extensions>=4.8.0 in ./.local/lib/python3.10/site-packages (from torch==2.5.0a0+872d972e41.nv24.08.17622132) (4.14.1)
Requirement already satisfied: filelock in ./.local/lib/python3.10/site-packages (from torch==2.5.0a0+872d972e41.nv24.08.17622132) (3.18.0)
Requirement already satisfied: fsspec in ./.local/lib/python3.10/site-packages (from torch==2.5.0a0+872d972e41.nv24.08.17622132) (2025.7.0)
Collecting sympy==1.13.1
  Downloading sympy-1.13.1-py3-none-any.whl (6.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.2/6.2 MB 13.0 MB/s eta 0:00:00
Requirement already satisfied: jinja2 in ./.local/lib/python3.10/site-packages (from torch==2.5.0a0+872d972e41.nv24.08.17622132) (3.1.6)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in ./.local/lib/python3.10/site-packages (from sympy==1.13.1->torch==2.5.0a0+872d972e41.nv24.08.17622132) (1.3.0)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/lib/python3/dist-packages (from jinja2->torch==2.5.0a0+872d972e41.nv24.08.17622132) (2.0.1)
Installing collected packages: sympy, torch
  Attempting uninstall: sympy
    Found existing installation: sympy 1.14.0
    Uninstalling sympy-1.14.0:
      Successfully uninstalled sympy-1.14.0
Successfully installed sympy-1.13.1 torch-2.5.0a0+872d972e41.nv24.8
elsaco@jetson:~$ python
Python 3.10.12 (main, May 27 2025, 17:12:29) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
2.5.0a0+872d972e41.nv24.08
>>> print(torch.cuda.is_available())
True
>>>

The above example was for torch-2.5. If you want the latest (PyTorch 2.8), download the wheel from the jp6/cu126 index.
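
A minimal sketch of that route, assuming you have already downloaded a 2.8 wheel from the jp6/cu126 index (the filename below is a placeholder, not a confirmed release):

# placeholder filename: substitute the actual wheel served by the jp6/cu126 index
python3 -m pip install --no-cache ./torch-2.8.0-cp310-cp310-linux_aarch64.whl
# quick sanity check after installation
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"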

Hi,

As mentioned above, please try to install PyTorch from our server below:

Thanks.

This works wonderfully, thank you! I don’t think I would’ve been able to solve it otherwise.
