Help me with the correct PyTorch and torchvision version requirements for JetPack 6.2.1 on the Orin Nano Super.

I did both procedures and everything seems to be OK.

But when I try to run NMS, I get the same error no matter which version I try:

NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMTIA, AutogradMAIA, AutogradMeta, Tracer, AutocastCPU, AutocastMTIA, AutocastMAIA, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
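A common cause of this error is a torch/torchvision version mismatch: each torchvision minor series is built against one torch minor series, and a mismatched (or CPU-only) wheel registers no CUDA kernel for torchvision::nms. As a rough sketch, you can sanity-check the pairing before installing; the version pairs below follow the upstream torch/torchvision release pairing and are an assumption to verify against the wheel index you actually install from:

```python
# Illustrative pairing of torch minor series to torchvision minor series
# (upstream release pairing; verify against your Jetson wheel index).
KNOWN_PAIRS = {
    "2.5": "0.20",
    "2.6": "0.21",
    "2.7": "0.22",
    "2.8": "0.23",
}

def compatible(torch_ver, vision_ver):
    """Return True if the torchvision version belongs to the series
    that the given torch series was released with."""
    series = ".".join(torch_ver.split(".")[:2])
    expected = KNOWN_PAIRS.get(series)
    return expected is not None and vision_ver.startswith(expected + ".")

print(compatible("2.5.0", "0.20.0"))  # True
print(compatible("2.5.0", "0.19.1"))  # False
```

Even when the pairing is right, both wheels still need to come from the same Jetson (aarch64 + CUDA) index; a torchvision wheel pulled from regular PyPI is CPU-only and produces exactly this backend error.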

I got everything to install and import following the suggestions, but now I get this at runtime:

libtorch_cuda_linalg.so: undefined symbol: cusolverDnXsyevBatched_bufferSize, version libcusolver.so.11
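An undefined-symbol error like this usually means the libcusolver the dynamic linker resolves at runtime is older than the one libtorch_cuda_linalg.so was linked against. A quick way to check, without involving torch at all, is to ask the linker directly. A minimal sketch, assuming a glibc/Linux system:

```python
import ctypes

def has_symbol(libname, symbol):
    """Load `libname` via the dynamic linker and report whether it
    exports `symbol`. Returns None if the library cannot be loaded."""
    try:
        lib = ctypes.CDLL(libname)
    except OSError:
        return None
    return hasattr(lib, symbol)

# Library and symbol names taken from the error message above:
print(has_symbol("libcusolver.so.11", "cusolverDnXsyevBatched_bufferSize"))
```

If this prints False, the installed libcusolver predates the symbol and needs upgrading; if it prints None, the library is not on the linker path at all.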

I fixed this in the past by just running:

apt install --reinstall libcusolver-12-6 libcusolver-dev-12-6

but it isn’t working in this case. I did a complete reflash on my AGX Orin using the SDK to make sure I didn’t have any lingering incompatibilities.

Hey, I did exactly what @johnny_nv did: I installed torch and torchvision from https://pypi.jetson-ai-lab.io/jp6/cu126 and I installed cuDSS, but I still get the torchvision::nms error.

NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMTIA, AutogradMAIA, AutogradMeta, Tracer, AutocastCPU, AutocastMTIA, AutocastMAIA, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

torch and torchvision seem to work (as mentioned by @shahizat), but when I try to run YOLOv11 I hit the NMS error.


It works on JetPack 6.1. Thank you very much! The steps:

  1. Install cuDSS.
  2. Install PyTorch from the pypi.jetson-ai-lab.io index (not the legacy index; the legacy wheel does not work).
  3. Check torch.__version__.

So the problem is clear: the current jp6/cu126 torch wheel does not bundle the cuDSS packages.

You should install cuDSS before installing PyTorch. Then torch.__version__ prints without error and torch.cuda.is_available() returns True.

Note that this workaround may only be valid for a short period. A few months ago there were no errors or problems like this; the NVIDIA Jetson developers should check their packaging.

Just confirming: after installing cuDSS, PyTorch 2.8.0 works on my Jetson Orin NX. Below are the steps I used to install cuDSS:

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/arm64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get -y install cudss
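After those steps, you can confirm the library actually resolves without importing torch. A minimal check, assuming the apt package ships libcudss.so.0 (the file named in the ImportError reported earlier in the thread):

```python
import ctypes

# Ask the dynamic linker for the library torch complained about.
try:
    ctypes.CDLL("libcudss.so.0")
    print("libcudss.so.0 resolved")
except OSError as exc:
    # If it is still missing, try `sudo ldconfig` or recheck the install.
    print("still missing:", exc)
```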

The following instructions worked for me: Quick Start Guide: NVIDIA Jetson with Ultralytics YOLO11

The latest sudo apt dist-upgrade fixed the libcusolver problem (libtorch_cuda_linalg.so: undefined symbol: cusolverDnXsyevBatched_bufferSize, version libcusolver.so.11).

Did you do it on JP6.2 or JP6.1?
I tried on JP6.2 and it didn't work, because I had to install different versions of torch and torchvision (compared to the tutorial) to be compatible with my JetPack version.

I was using JP6.2. I followed the guide despite it being for 6.1 and it still worked.

Buying this Orin Nano 8GB has recently been a disaster. Why can't the software be packaged so that dependencies aren't broken?

I’ve been unable to run any of the container examples.

I came to this page to fix PyTorch and it was helpful, but torchvision is still broken!

Running python ~/jetson-containers/packages/pytorch/torchvision/test.py fails at the line "import requests" with "ModuleNotFoundError: No module named 'requests'".

@johnny_nv's post on 9/06 seems to show that both should work after a force reinstall, but torchvision does not.

I did have to fix the cuDSS files because of the error the OP reported (ImportError: libcudss.so.0: cannot open shared object file: No such file or directory). That is fixed, and torch works, but torchvision is still not functional.

I'm getting an error: "ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory"

Thank you for your support.
I deleted cuDNN and CUDA from my device, but after rebooting L4T was gone. I don't know how to restore it, and I don't want to use a microSD card to flash JetPack 6.2.
Is there any way to restore my L4T, or not? I also don't have a female-to-female jumper wire to put the device into recovery mode and use SDK Manager. PFB snap:

I think I've hit the same problem. Computer: Jetson Orin Nano Super; JetPack 6.1; CUDA 12.6; Python 3.10.

I need to install PyTorch and torchvision. I have installed torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl.

When I try to use it, I get the error: ImportError: libcusparseLt.so.0: cannot open shared object file: No such file or directory
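When a CUDA library fails to load like this, it is worth checking whether the file exists anywhere before downloading anything: sometimes the library is present but its directory is not in the linker cache. A small sketch; the search directories are assumptions for a typical JetPack install, so adjust as needed:

```python
import os

# Typical library locations on a JetPack system (assumed, not exhaustive).
SEARCH_DIRS = ["/usr/lib/aarch64-linux-gnu", "/usr/local/cuda/lib64", "/usr/local/lib"]

def find_lib(stem, dirs=SEARCH_DIRS):
    """Walk the given directories and return paths of files whose
    name starts with `stem` (e.g. 'libcusparseLt')."""
    hits = []
    for d in dirs:
        if not os.path.isdir(d):
            continue
        for root, _subdirs, files in os.walk(d):
            hits.extend(os.path.join(root, f) for f in files if f.startswith(stem))
    return hits

print(find_lib("libcusparseLt"))
```

If the library turns up in a non-standard directory, adding that directory to /etc/ld.so.conf.d/ and running sudo ldconfig is usually enough; if nothing is found, it needs to be installed or downloaded.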


If you flash a microSD card with the BalenaEtcher tool, you will face issues in the future. I shifted to SDK Manager in a VM and it has been working so far.

I flashed the NVMe using SDK Manager on VMware and tried various methods from blogs like JetsonHacks; it still didn't work.


Me too; the previous method through a VM is not working.
I will try installing Ubuntu on my hard disk and try again.

You can download the library from here: Index of /compute/cusparselt/redist/libcusparse_lt/linux-aarch64

Make sure to use the CUDA 12 version.

Any conclusion to this topic? I am also stuck with this :(

For me, I successfully installed JetPack from a real Ubuntu 24 host + SDK Manager.
I can't explain more; bare metal is not easy. After several days I could finally move on and install OpenCV with CUDA.

Followed PyTorch for Jetson and it worked.


Oh my goodness… I wasted 20+ hours over several weeks trying to resolve all these dependencies in my spare time, starting before this thread existed. @johnny_nv's post, and the one after it about installing cuDSS first, worked. I may actually be able to run the training code from the Deep Learning course now. Thank you.

python3 -c "import torch; print(torch.cuda.is_available())"
True