Help me find the correct PyTorch and torchvision versions for JetPack 6.2.1 on the Orin Nano Super

Hello (sighs),

I’m unable to find working versions of CUDA-enabled PyTorch and a matching torchvision. I have tried multiple links, forum guides, etc., but all of them lead to version-mismatch errors in the end.

Can someone please help me build the correct and latest working versions? (If your solution is from a few weeks or months before this post, please don’t bother replying; those steps no longer work.)

To start this thread with a concrete trial-and-error example, here is my most recent attempt:

  1. Downloaded pytorch 2.8.0 from the jp6/cu126 index: https://pypi.jetson-ai-lab.io/jp6/cu126/+f/564/4d4458f1ba159/torch-2.8.0-cp310-cp310-linux_aarch64.whl#sha256=5644d4458f1ba15950995f17f6ea91f3b3e4adf0d1dfef816b04a5d7325598c8

  2. Installed cuSPARSELt, then installed the wheel with pip install <.whl>

  3. Ran python3 in a terminal to import torch, but got:

>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/nvidia/.local/lib/python3.10/site-packages/torch/__init__.py", line 416, in <module>
    from torch._C import *  # noqa: F403
ImportError: libnccl.so.2: cannot open shared object file: No such file or directory
>>> quit()
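One quick way to see which of these libraries are visible to the dynamic loader, before importing torch at all, is a small ctypes check. This is only a sketch; the candidate list below is an assumption based on the errors in this thread, not an official list of what torch dlopens:

```python
# Sketch: ask the dynamic linker which candidate libraries it can resolve.
# CANDIDATES is an assumption based on the ImportErrors seen in this thread.
from ctypes.util import find_library

CANDIDATES = ["nccl", "cudss", "cusparseLt", "cudart"]

def missing_libs(names=CANDIDATES):
    """Return the subset of library names the dynamic linker cannot find."""
    return [n for n in names if find_library(n) is None]

print("missing:", missing_libs())
```

Any name printed as missing points at the next ImportError you would hit; install that library (or fix LD_LIBRARY_PATH / run sudo ldconfig) before retrying the import.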

Before this, one more error had come up:
nvidia@nvidia-desktop:~$ python3
Python 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/nvidia/.local/lib/python3.10/site-packages/torch/__init__.py", line 416, in <module>
    from torch._C import *  # noqa: F403
ImportError: libcudss.so.0: cannot open shared object file: No such file or directory

but I solved that one using ChatGPT (below is the ChatGPT response, which fixes only the ImportError: libcudss.so.0: cannot open shared object file: No such file or directory error):

CHATGPT reply below:

1. Download the Correct Archive

Run this command:

mkdir -p tmp_cudss && cd tmp_cudss
CUDSS_ARCHIVE="libcudss-linux-sbsa-0.6.0.5_cuda12-archive"
curl -L -O https://developer.download.nvidia.com/compute/cudss/redist/libcudss/linux-sbsa/${CUDSS_ARCHIVE}.tar.xz

Note: The -L flag ensures redirects are followed.

2. Extract and Install

tar xf ${CUDSS_ARCHIVE}.tar.xz
sudo cp -a ${CUDSS_ARCHIVE}/include/* /usr/local/cuda/include/
sudo cp -a ${CUDSS_ARCHIVE}/lib/* /usr/local/cuda/lib64/
cd ..
rm -rf tmp_cudss
sudo ldconfig

3. Verify It Worked

ls /usr/local/cuda/lib64 | grep cudss

You should see:

libcudss.so
libcudss.so.0

4. Test PyTorch

python3 -c "import torch; print(torch.__version__)"

If everything went well, the import should now work without the missing library error.
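To confirm the copied library is resolvable the same way torch resolves it at import time (via dlopen), here is a minimal sketch; can_load is a hypothetical helper, not part of any library:

```python
# Sketch: check whether the dynamic linker can resolve and load a shared
# library by soname, which is exactly what torch's import does internally.
import ctypes

def can_load(soname):
    """True if dlopen succeeds for the given library name."""
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False

print("libcudss.so.0 loadable:", can_load("libcudss.so.0"))
```

If this prints False after the copy, re-run sudo ldconfig or check that /usr/local/cuda/lib64 is on the loader path.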

#############################################################################

Coming back to the PyTorch wheels provided by NVIDIA: I’ve noticed some inconsistencies. Initially the links pointed to a .dev domain, then later changed to .io, and the available wheels seem to change over time. Some versions install, but then lead to runtime errors.

This makes the developer experience quite frustrating. I’ve spent several days just trying to get PyTorch and torchvision set up properly on the Jetson kit, and it’s been more difficult than expected.

I really like the potential of the Jetson platform, but I think many users would benefit from clearer, stable, and well-documented support for essential libraries like PyTorch and torchvision.

Does anyone have a known working set of versions (torch + torchvision) for JetPack 6.2.1 that they could share? That would save a lot of time for developers who are just trying to get their projects running.


HELP!!!

Hey, I initially got the error:
ImportError: libcudss.so.0: cannot open shared object file: No such file or directory.

So, I proceeded to follow your steps and I ended up having the error of:
ImportError: libnccl.so.2: cannot open shared object file: No such file or directory

Why am I getting the second error, when I thought the solution you provided was meant to solve both? Please help. I am on the same setup: JetPack 6.2, torch 2.8.0, on an AGX Orin.

Damn bruh, if that was my solution then why would I cry for help here lol! There’s something going on with Jetsons and all of these fixes fail. I don’t know why it won’t work, and nobody here seems to care enough to help :(

I got the same issue on Jetson Orin AGX with Jetpack 6.2. Any solutions?

I noticed that torch 2.8.0 in the jp6/cu126 index now has a different SHA than what I installed 1-2 months ago. Why can’t we get a consistent setup for Jetson devices? It has been a frustrating experience.


As far as I know, there are no solutions. There was one a week ago, but the files were removed from the pypi.jetson-ai-lab site. I’ve been waiting for help for many days! :)
HELPPPPP!!!


Hi,

Please get the libnccl package from (JP6.2: CUDA 12.6):

https://developer.nvidia.com/nccl/nccl-legacy-download

If you encounter the undefined-symbol error, please reinstall the torch whl.

Thanks
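If it helps to confirm which NCCL actually ended up installed, here is a hedged sketch that dlopens libnccl.so.2 and asks it for its own version via ncclGetVersion (the decoding formula below is the encoding NCCL has used since 2.9; nccl_version is a hypothetical helper):

```python
# Sketch: report the installed NCCL version, or None if the library is
# missing, by calling ncclGetVersion through ctypes.
import ctypes

def nccl_version():
    """Return (major, minor, patch) from libnccl.so.2, or None if unavailable."""
    try:
        lib = ctypes.CDLL("libnccl.so.2")
    except OSError:
        return None  # library not installed or not on the loader path
    ver = ctypes.c_int(0)
    if lib.ncclGetVersion(ctypes.byref(ver)) != 0:
        return None  # nonzero ncclResult_t means the call failed
    v = ver.value
    # Since NCCL 2.9 the code is major*10000 + minor*100 + patch,
    # e.g. 22403 decodes to 2.24.3.
    return (v // 10000, (v % 10000) // 100, v % 100)

print("NCCL version:", nccl_version())
```

Comparing this against the NCCL version your torch wheel was built with shows whether the apt package is too old for the wheel.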

I was previously able to successfully install and import Torch using the wheels from https://pypi.jetson-ai-lab.io/jp6/cu126. However, at some point, after installing Torch, import torch started failing. Could there be a problem with the torch-2.8.0-cp310-cp310-linux_aarch64.whl build?



David, I’ve reinstalled the same torch wheel but ran into the same error again!? :(

Edit: I had downloaded this: nccl-local-repo-ubuntu2204-2.24.3-cuda12.6_1.0-1_arm64.deb

Hi,

We will spend some time checking whether we can reproduce this on our side.

When downloading NCCL, we used the command below, as the download center shows:

sudo apt install libnccl2=2.22.3-1+cuda12.6 libnccl-dev=2.22.3-1+cuda12.6

Thanks

Hmm, the latest .deb file I downloaded came with this command:

nvidia@nvidia-desktop:~/Downloads$ sudo apt install libnccl2=2.24.3-1+cuda12.6 libnccl-dev=2.24.3-1+cuda12.6
Reading package lists… Done
Building dependency tree… Done
Reading state information… Done
libnccl-dev is already the newest version (2.24.3-1+cuda12.6).
libnccl2 is already the newest version (2.24.3-1+cuda12.6).
The following packages were automatically installed and are no longer required:
libpaps0 paps
Use ‘sudo apt autoremove’ to remove them.
0 upgraded, 0 newly installed, 0 to remove and 72 not upgraded.
nvidia@nvidia-desktop:~/Downloads$

Hi!

I have the same error, “undefined symbol: ncclCommWindowDeregister”,
and I fixed it by reinstalling the whl, as DavidDDD said:

pip3 install torch-2.8.0-cp310-cp310-linux_aarch64.whl --force-reinstall
pip3 install torchvision-0.23.0-cp310-cp310-linux_aarch64.whl --force-reinstall

Now the error is gone.
But it shows:

torch: 2.8.0+cpu
CUDA available: False

with the following code:

import torch
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA:", torch.version.cuda)
    print("Device:", torch.cuda.get_device_name(0))

Does this mean torch will run with cpu only?
How can I get CUDA available (the GPU build) for JetPack 6.2 on the NVIDIA Jetson Orin Nano™ 8GB in Super Mode?
Thanks a lot!
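Yes, the +cpu local version tag means that particular wheel was built without CUDA, so no amount of library fixing will enable the GPU; only reinstalling a CUDA-enabled wheel from the jp6/cu126 index can. A tiny sketch of detecting that tag from the version string (is_cpu_only_build is a hypothetical helper, not a torch API):

```python
# Sketch: a "+cpu" local version tag (PEP 440) on torch.__version__ marks a
# CPU-only build; CUDA builds carry tags like "+cu126" or no tag at all.
def is_cpu_only_build(version):
    """True if a torch version string carries the '+cpu' build tag."""
    return version.partition("+")[2] == "cpu"

print(is_cpu_only_build("2.8.0+cpu"))    # True  -> CPU-only wheel
print(is_cpu_only_build("2.8.0+cu126"))  # False -> CUDA wheel
```

In other words, when torch.__version__ ends in +cpu, torch.cuda.is_available() returning False is expected behavior, not a broken driver.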

I tried this. Still got an error

Python 3.10.18 | packaged by conda-forge | (main, Jun 4 2025, 14:39:45) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/data/anaconda3/envs/satnerf/lib/python3.10/site-packages/torch/__init__.py", line 416, in <module>
    from torch._C import *  # noqa: F403
ImportError: /data/anaconda3/envs/satnerf/lib/python3.10/site-packages/torch/lib/libtorch_cuda.so: undefined symbol: ncclCommWindowDeregister
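This undefined-symbol error usually means torch was linked against a newer NCCL than the libnccl.so.2 the loader actually finds. A hedged sketch to check whether the installed library exports the symbol torch wants (exports_symbol is a hypothetical helper):

```python
# Sketch: dlopen a shared library and probe it for a symbol via dlsym,
# to distinguish "library missing" from "library too old for this torch".
import ctypes

def exports_symbol(soname, symbol):
    """True if the shared library loads and defines the given symbol."""
    try:
        lib = ctypes.CDLL(soname)
    except OSError:
        return False  # library not installed at all
    return hasattr(lib, symbol)  # attribute lookup performs dlsym

print(exports_symbol("libnccl.so.2", "ncclCommWindowDeregister"))
```

If this prints False while libnccl.so.2 is installed, the NCCL version is simply older than what the torch wheel was built against, and upgrading NCCL (or using a wheel built against the system NCCL) is the way out.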

Hi All,

Thanks for your patience.

We have updated the latest wheel in the jp6/cu126 index.

Please download and reinstall again.

Thanks

Hello, I’m investigating this.

pip install torch --index-url https://pypi.jetson-ai-lab.io/jp6/cu126

PyTorch now has some dependencies that it did not have in the past:

  1. cuDSS (cuDSS | NVIDIA Developer)
  2. cuSPARSELt (cuSPARSELt 0.8.0 Downloads | NVIDIA Developer)
  3. NCCL (not distributed by us for cuda-tegra), which must be installed from GitHub (NVIDIA/nccl: Optimized primitives for collective multi-GPU communication)

I found the problem, it will be fixed soon.

YAY - I hope so, this has been driving me mad!

About NCCL: the problem is that the wheel was built with export USE_SYSTEM_NCCL=1. It will be fixed in a few hours.

Could you try now?