Install PyTorch with CUDA on Jetson Orin Nano Developer Kit

Hi, I am trying to install PyTorch with CUDA for JetPack 6. Here are the details:
JetPack details:

Package: nvidia-jetpack
Source: nvidia-jetpack (6.0)
Version: 6.0+b106
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-jetpack-runtime (= 6.0+b106), nvidia-jetpack-dev (= 6.0+b106)
Homepage: http://developer.nvidia.com/jetson
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_6.0+b106_arm64.deb
Size: 29296
SHA256: 561d38f76683ff865e57b2af41e303be7e590926251890550d2652bdc51401f8
SHA1: ef3fca0c1b5c780b2bad1bafae6437753bd0a93f
MD5sum: 95de21b4fce939dee11c6df1f2db0fa5
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Package: nvidia-jetpack
Source: nvidia-jetpack (6.0)
Version: 6.0+b87
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-jetpack-runtime (= 6.0+b87), nvidia-jetpack-dev (= 6.0+b87)
Homepage: http://developer.nvidia.com/jetson
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_6.0+b87_arm64.deb
Size: 29298
SHA256: 70be95162aad864ee0b0cd24ac8e4fa4f131aa97b32ffa2de551f1f8f56bc14e
SHA1: 36926a991855b9feeb12072694005c3e7e7b3836
MD5sum: 050cb1fd604a16200d26841f8a59a038
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

I am downloading PyTorch from the following link:
https://developer.download.nvidia.com/compute/redist/jp/v60/pytorch/
Error:
ERROR: torch-2.2.0a0+81ea7a4.nv23.12-cp310-cp310-linux_aarch64.whl is not a supported wheel on this platform.

Kindly help in resolving this issue.

Hi,

The package name for JetPack 6.0 should be torch-2.4.0a0+07cecf4168.nv24.05.14710581-cp310-cp310-linux_aarch64.whl.
Please double-check it.
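
If pip still rejects the wheel, you can list the tags your interpreter actually accepts with pip3 debug --verbose, or with a short Python snippet like the one below (a minimal sketch; it assumes the packaging module is importable, which you can install with pip if needed):

# List the wheel tags this Python interpreter accepts; the rejected wheel's
# cp310-cp310-linux_aarch64 tag must appear in this list for pip to install it.
from packaging.tags import sys_tags

for tag in sys_tags():
    print(tag)   # e.g. cp310-cp310-manylinux_2_35_aarch64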

Thanks.

I got the same error:

ERROR: torch-2.4.0a0+07cecf4168.nv24.05.14710581-cp310-cp310-linux_aarch64.whl is not a supported wheel on this platform.

Hi,

Please try the below commands:

$ wget https://developer.download.nvidia.com/compute/redist/jp/v60/pytorch/torch-2.4.0a0+07cecf4168.nv24.05.14710581-cp310-cp310-linux_aarch64.whl -O torch-2.4.0a0+07cecf4168.nv24.05.14710581-cp310-cp310-linux_aarch64.whl
$ pip3 install torch-2.4.0a0+07cecf4168.nv24.05.14710581-cp310-cp310-linux_aarch64.whl

If the error keeps showing, please share the output of the below commands with us.

$ md5sum torch-2.4.0a0+07cecf4168.nv24.05.14710581-cp310-cp310-linux_aarch64.whl
$ apt show nvidia-jetpack

Thanks.

Hi,

(base) orin@ubuntu:~$ wget https://developer.download.nvidia.com/compute/redist/jp/v60/pytorch/torch-2.4.0a0+07cecf4168.nv24.05.14710581-cp310-cp310-linux_aarch64.whl -O torch-2.4.0a0+07cecf4168.nv24.05.14710581-cp310-cp310-linux_aarch64.whl
--2024-06-27 16:58:39--  https://developer.download.nvidia.com/compute/redist/jp/v60/pytorch/torch-2.4.0a0+07cecf4168.nv24.05.14710581-cp310-cp310-linux_aarch64.whl
Resolving developer.download.nvidia.com (developer.download.nvidia.com)... 152.199.39.144
Connecting to developer.download.nvidia.com (developer.download.nvidia.com)|152.199.39.144|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1047045276 (999M) [application/octet-stream]
Saving to: ‘torch-2.4.0a0+07cecf4168.nv24.05.14710581-cp310-cp310-linux_aarch64.whl’

torch-2.4.0a0+07cecf4168.nv24.05.14710581-cp310-cp310-linux 100%[========================================================================================================================================>] 998.54M  13.8MB/s    in 73s

2024-06-27 16:59:52 (13.7 MB/s) - ‘torch-2.4.0a0+07cecf4168.nv24.05.14710581-cp310-cp310-linux_aarch64.whl’ saved [1047045276/1047045276]

(base) orin@ubuntu:~$ pip3 install torch-2.4.0a0+07cecf4168.nv24.05.14710581-cp310-cp310-linux_aarch64.whl
ERROR: torch-2.4.0a0+07cecf4168.nv24.05.14710581-cp310-cp310-linux_aarch64.whl is not a supported wheel on this platform.

Requested Output:

(base) orin@ubuntu:~$ md5sum torch-2.4.0a0+07cecf4168.nv24.05.14710581-cp310-cp310-linux_aarch64.whl
8f62444f077b923bce7de839a2ecc463  torch-2.4.0a0+07cecf4168.nv24.05.14710581-cp310-cp310-linux_aarch64.whl
(base) orin@ubuntu:~$ apt show nvidia-jetpack
Package: nvidia-jetpack
Version: 6.0+b106
Priority: standard
Section: metapackages
Source: nvidia-jetpack (6.0)
Maintainer: NVIDIA Corporation
Installed-Size: 199 kB
Depends: nvidia-jetpack-runtime (= 6.0+b106), nvidia-jetpack-dev (= 6.0+b106)
Homepage: http://developer.nvidia.com/jetson
Download-Size: 29.3 kB
APT-Sources: https://repo.download.nvidia.com/jetson/common r36.3/main arm64 Packages
Description: NVIDIA Jetpack Meta Package

N: There are 2 additional records. Please use the '-a' switch to see them.

Hi,

The package looks good in your environment.
But it is built for Python 3.10. Which Python version are you using in the virtual environment?
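
You can confirm quickly from inside that environment with a small snippet like this (a minimal sketch; run it with the same interpreter pip uses):

# The cp310-cp310-linux_aarch64 wheel is only accepted if this interpreter
# is CPython 3.10 running on aarch64.
import sys
import platform

print(sys.version)          # should start with 3.10
print(platform.machine())   # should print aarch64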

Thanks.

Hi,
I am able to get CUDA output by using the following code:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

Using device: cuda

But now, when I run a YOLOv8 object detection model, I get an error from torchvision.

Traceback (most recent call last):
  File "/home/orin/Desktop/pytorchModel/yolov8.py", line 10, in <module>
    results = model(source, device=0)  # list of Results objects
  File "/home/orin/anaconda3/envs/gpu/lib/python3.10/site-packages/ultralytics/engine/model.py", line 174, in __call__
    return self.predict(source, stream, **kwargs)
  File "/home/orin/anaconda3/envs/gpu/lib/python3.10/site-packages/ultralytics/engine/model.py", line 442, in predict
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
  File "/home/orin/anaconda3/envs/gpu/lib/python3.10/site-packages/ultralytics/engine/predictor.py", line 168, in __call__
    return list(self.stream_inference(source, model, *args, **kwargs))  # merge list of Result into one
  File "/home/orin/anaconda3/envs/gpu/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/home/orin/anaconda3/envs/gpu/lib/python3.10/site-packages/ultralytics/engine/predictor.py", line 234, in stream_inference
    self.model.warmup(imgsz=(1 if self.model.pt or self.model.triton else self.dataset.bs, 3, *self.imgsz))
  File "/home/orin/anaconda3/envs/gpu/lib/python3.10/site-packages/ultralytics/nn/autobackend.py", line 625, in warmup
    import torchvision  # noqa (import here so torchvision import time not recorded in postprocess time)
  File "/home/orin/anaconda3/envs/gpu/lib/python3.10/site-packages/torchvision/__init__.py", line 6, in <module>
    from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils
  File "/home/orin/anaconda3/envs/gpu/lib/python3.10/site-packages/torchvision/_meta_registrations.py", line 164, in <module>
    def meta_nms(dets, scores, iou_threshold):
  File "/home/orin/anaconda3/envs/gpu/lib/python3.10/site-packages/torch/library.py", line 486, in inner
    handle = entry.abstract_impl.register(func_to_register, source)
  File "/home/orin/anaconda3/envs/gpu/lib/python3.10/site-packages/torch/_library/abstract_impl.py", line 30, in register
    if torch._C._dispatch_has_kernel_for_dispatch_key(self.qualname, "Meta"):
RuntimeError: operator torchvision::nms does not exist

Here are the installed versions of torch, torchaudio, and torchvision:

torch              2.4.0a0+07cecf4168.nv24.5
torchaudio         2.0.2
torchvision        0.18.1

Hi,

How did you install TorchVision and TorchAudio? Were the packages built with CUDA support?

For JetPack 6.0, you can find the prebuilt TorchVision and TorchAudio in the below topic.
Could you give it a try?
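
In the meantime, a rough way to check whether the currently installed packages were built with CUDA (a sketch, not an official tool; torch.version.cuda is None for CPU-only builds, and a torchvision wheel built against a different torch usually fails at import time, as in the nms error above):

import torch

print("torch", torch.__version__, "CUDA build:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

try:
    import torchvision
    print("torchvision", torchvision.__version__)
except RuntimeError as err:
    print("torchvision import failed:", err)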

Thanks.

I tried the mentioned suggestion but got the error below. Kindly provide the correct files and instructions to install torchvision and torchaudio.

Thanks

Hi,

The below commands work in our environment.
Please give it a try.

$ wget https://nvidia.box.com/shared/static/9si945yrzesspmg9up4ys380lqxjylc3.whl -O torchaudio-2.3.0+952ea74-cp310-cp310-linux_aarch64.whl
$ pip3 install torchaudio-2.3.0+952ea74-cp310-cp310-linux_aarch64.whl
$ wget https://nvidia.box.com/shared/static/u0ziu01c0kyji4zz3gxam79181nebylf.whl -O torchvision-0.18.0a0+6043bc2-cp310-cp310-linux_aarch64.whl
$ pip3 install torchvision-0.18.0a0+6043bc2-cp310-cp310-linux_aarch64.whl
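
After installing, an optional sanity check (it assumes a CUDA device is visible) to confirm that the torchvision CUDA ops were built against this torch:

# If the wheels match, nms runs without the
# "operator torchvision::nms does not exist" error.
import torch
from torchvision.ops import nms

boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                      [1.0, 1.0, 11.0, 11.0]], device="cuda")
scores = torch.tensor([0.9, 0.8], device="cuda")
print(nms(boxes, scores, iou_threshold=0.5))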

Thanks.

Thanks @AastaLLL for your help. It worked. Just one last question: if I have Python 3.11, where can I download the wheel files? Kindly provide a link to access wheel files for different Python versions.

Thanks

For versions of Python other than the default, you will need to build PyTorch from source, as shown in this topic, or use jetson-containers.

