PyTorch for Jetson

When I try to import torchvision I get an error: SyntaxError: future feature annotations is not defined. I am using JetPack 4.6 rev3 with PyTorch v1.8.0 and torchvision v0.9.0.

Hi @arjunslvrj, can you try running pip3 install 'pillow<9' ?
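As background (an illustrative sketch, not part of the original reply): Pillow 9.x source files begin with `from __future__ import annotations`, a feature flag that only exists from Python 3.7 onward, while JetPack 4.x ships Python 3.6 — hence the SyntaxError and the `'pillow<9'` pin. A minimal version check:

```python
import sys

# `from __future__ import annotations` (used by Pillow >= 9) requires
# Python >= 3.7, but JetPack 4.x ships Python 3.6 — hence the SyntaxError
# when importing PIL, and the fix of pinning pillow below 9.
def supports_pillow9(version_info=sys.version_info):
    return tuple(version_info[:2]) >= (3, 7)

print(supports_pillow9((3, 6, 9)))   # JetPack 4.6 default Python -> False
print(supports_pillow9((3, 8, 10)))  # JetPack 5.x default Python -> True
```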


Thank you dusty

CMake can’t find TorchVision when using the l4t-pytorch:r35.1.0-pth1.13-py3 Docker image, and I get a warning about a missing kineto library when using libtorch. I am currently trying to load a TorchScript model in C++, which requires me to include torchvision/vision.h.

CMake Error at CMakeLists.txt:24 (find_package):
  By not providing "FindTorchVision.cmake" in CMAKE_MODULE_PATH this project
  has asked CMake to find a package configuration file provided by
  "TorchVision", but CMake did not find one.

  Could not find a package configuration file provided by "TorchVision" with
  any of the following names:

    TorchVisionConfig.cmake
    torchvision-config.cmake

  Add the installation prefix of "TorchVision" to CMAKE_PREFIX_PATH or set
  "TorchVision_DIR" to a directory containing one of the above files.  If
  "TorchVision" provides a separate development package or SDK, be sure it
  has been installed.

I have tried adding the following CMAKE_PREFIX_PATHs with no success:

list(APPEND CMAKE_PREFIX_PATH /usr/local/lib/python3.8/dist-packages/torchvision-0.13.0a0+da3794e-py3.8-linux-aarch64.egg/)
list(APPEND CMAKE_PREFIX_PATH /usr/local/lib/python3.8/dist-packages/torchvision-0.13.0a0+da3794e-py3.8-linux-aarch64.egg/torchvision/)

CMake can find libtorch when I use the following:
list(APPEND CMAKE_PREFIX_PATH /usr/local/lib/python3.8/dist-packages)

But it does give a warning:

CMake Warning at /usr/local/lib/python3.8/dist-packages/torch/share/cmake/Torch/TorchConfig.cmake:22 (message):
  static library kineto_LIBRARY-NOTFOUND not found.
Call Stack (most recent call first):
  /usr/local/lib/python3.8/dist-packages/torch/share/cmake/Torch/TorchConfig.cmake:127 (append_torchlib_if_found)
  CMakeLists.txt:25 (find_package)

Hi @franciscon9k63, it doesn’t appear that the torchvision C++ headers get installed when torchvision is built, and I’m not sure how to build it with that enabled. This is how I build/install torchvision in the container’s dockerfile:

Perhaps you could clone that branch of the torchvision repo to get the headers?

Hi, I have run into a problem and need your help.
machine: Jetson Xavier, JetPack 5.1
torch version: torch-1.14.0a0+44dac51c.nv23.1

When I execute "torchrun ****", I get:
Traceback (most recent call last):
  File "/home/nvidia/.local/bin/torchrun", line 5, in <module>
    from torch.distributed.run import main
  File "/home/nvidia/.local/lib/python3.8/site-packages/torch/distributed/run.py", line 383, in <module>
    from torch.distributed.elastic.rendezvous.utils import _parse_rendezvous_config
  File "/home/nvidia/.local/lib/python3.8/site-packages/torch/distributed/elastic/rendezvous/__init__.py", line 131, in <module>
    from .api import *  # noqa: F403
  File "/home/nvidia/.local/lib/python3.8/site-packages/torch/distributed/elastic/rendezvous/api.py", line 10, in <module>
    from torch.distributed import Store
ImportError: cannot import name 'Store' from 'torch.distributed' (/home/nvidia/.local/lib/python3.8/site-packages/torch/distributed/__init__.py)

Hi @Kic1101, these wheels were built with USE_DISTRIBUTED disabled, so they don’t support torch.distributed. If you need that, you would need to rebuild PyTorch from source.
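A side note (my sketch, not from the thread): code that might run on these wheels can guard its distributed paths at runtime, since `torch.distributed.is_available()` reports whether the installed build was compiled with USE_DISTRIBUTED. A minimal guard that also tolerates torch being absent entirely:

```python
# Guard distributed-only code paths; the prebuilt Jetson wheels report
# torch.distributed.is_available() == False because USE_DISTRIBUTED was off.
try:
    import torch.distributed as dist
    HAVE_DIST = dist.is_available()
except ImportError:  # torch not installed at all
    HAVE_DIST = False

print("torch.distributed usable:", HAVE_DIST)
```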

Thanks, that was it. I followed the instructions from GitHub - pytorch/vision: Datasets, Transforms and Models specific to Computer Vision

I did the following

ARG LIBTORCH_PATH=/usr/local/lib/python3.8/dist-packages/torch/

RUN git clone https://github.com/pytorch/vision torchvision && \
    cd torchvision && \
    git checkout ${TORCHVISION_VERSION} && \
    mkdir build && \
    cd build && \
    cmake -DCMAKE_PREFIX_PATH=${LIBTORCH_PATH} -DWITH_CUDA=on .. && \
    make -j$(nproc) && \
    sudo make install && \
    rm -rf /torchvision

Thanks @franciscon9k63, that’s good to know how you built it with that enabled 👍

That explains it. Thanks a lot.

I ran into problems while installing torchvision. I installed torch 1.8 and wanted to install torchvision 0.9.0, but the result is:

Finished processing dependencies for torchvision==0.9.0a0+01dfa8e
nvidia@ubuntu:~/torchvision$ python3
Python 3.6.9 (default, Nov 25 2022, 14:10:45) 
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> import torchvision
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/nvidia/torchvision/torchvision/__init__.py", line 7, in <module>
    from torchvision import datasets
  File "/home/nvidia/torchvision/torchvision/datasets/__init__.py", line 1, in <module>
    from .lsun import LSUN, LSUNClass
  File "/home/nvidia/torchvision/torchvision/datasets/lsun.py", line 2, in <module>
    from PIL import Image
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 656, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 626, in _load_backward_compatible
  File "/home/nvidia/.local/lib/python3.6/site-packages/Pillow-9.4.0-py3.6-linux-aarch64.egg/PIL/Image.py", line 59, in <module>
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 951, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 894, in _find_spec
  File "<frozen importlib._bootstrap_external>", line 1157, in find_spec
  File "<frozen importlib._bootstrap_external>", line 1131, in _get_spec
  File "<frozen importlib._bootstrap_external>", line 1112, in _legacy_get_spec
  File "<frozen importlib._bootstrap>", line 441, in spec_from_loader
  File "<frozen importlib._bootstrap_external>", line 544, in spec_from_file_location
  File "/home/nvidia/.local/lib/python3.6/site-packages/Pillow-9.4.0-py3.6-linux-aarch64.egg/PIL/_deprecate.py", line 1
SyntaxError: future feature annotations is not defined

The strange thing: whenever I try to install a different version of torch, it installs torch 1.8.0 every time, even after I delete and uninstall the package.
Does anyone know what I can do?

Please try running pip3 install 'pillow<9'

Try double-checking the URL of the wheel that you downloaded. Perhaps the wheel is for PyTorch 1.8. Each version has a different URL.


I see JetPack 5.0 comes with Python 3.8 and PyTorch v1.12.0.
I’ve installed Python 3.11 on my Jetson (Xavier). Is there any chance of getting PyTorch to work with Python 3.11, or should I just stick with Python 3.8, which after all is what JetPack 5.0 provides?

@Spange I’m not sure if Python 3.11 is officially supported/tested in PyTorch yet (especially for aarch64+CUDA), but I did find this thread about it:

We only build PyTorch wheels for the default version of Python that comes with Ubuntu (and on JetPack 5.0, that’s Python 3.8). However you could try building it yourself from source.


Heads up that an update is needed as newer PyTorch pip wheels are available, such as for JetPack SDK 5.1 here:

https://developer.download.nvidia.com/compute/redist/jp/v51/pytorch/

Looks like everything is still Python 3.8 while 3.11.2 is the latest stable version.

Thank you, @dusty_nv !

Thanks @adam-erickson, yes new PyTorch wheels will now be on that server, and I’ve posted a link to the official docs at the top of this thread: https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform/index.html

Yes, we distribute the PyTorch wheels for the default version of Python that comes with Ubuntu, and for Ubuntu 20.04 (JetPack 5.0/5.1) that is Python 3.8. For Python 3.11, you would need to build PyTorch from source.

Maybe you have to reinstall JetPack with the SDK Manager.

But I installed torch-1.14.0a0+44dac51c.nv23.02 and torch-2.0.0a0+8aa34602.nv23.03, and they are incompatible with torchvision.

I have tried torchvision 0.15.1, 0.14.1, and 0.14.0.

Hi @cory_weng, have you tried building torchvision from source like shown under the Installation section of the first post of this topic, as opposed to installing torchvision from pip?

I am trying to compile PyTorch v1.13.1 from scratch for the Orin (initially on 5.0.2) so I can enable distributed, and I patched utils/cpp_extension.py per the instructions at the top of the thread. But I am running into a CMake error: Unknown CUDA Architecture Name 7.2, 8.7 in CUDA_SELECT_NVCC_ARCH_FLAGS from cuda_select_nvcc_arch_flags.

Any help would be greatly appreciated. I get why the distributed option is not commonly used on the Orin, but it sure would be nice if there were a version for 5.1 that had it enabled.

Thanks, and sorry if this has been answered in the thread.
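One avenue worth checking (an assumption on my part, not a confirmed fix from the thread): PyTorch's from-source build normally takes its target GPU architectures from the TORCH_CUDA_ARCH_LIST environment variable rather than relying on CMake's own arch-name detection, so exporting it before building may sidestep the CUDA_SELECT_NVCC_ARCH_FLAGS error. A sketch of a build environment a Jetson source build might use — the arch values (7.2 = Xavier, 8.7 = Orin) are taken from the error message above, and the USE_DISTRIBUTED/USE_NCCL values are assumptions:

```python
import os

# Hypothetical environment for a from-source PyTorch build on Jetson.
# TORCH_CUDA_ARCH_LIST is a real PyTorch build variable; the values here
# (7.2 for Xavier, 8.7 for Orin) come from the error message in the post.
env = dict(os.environ)
env["TORCH_CUDA_ARCH_LIST"] = "7.2;8.7"
env["USE_DISTRIBUTED"] = "1"   # the setting the poster wants enabled
env["USE_NCCL"] = "0"          # assumption: NCCL backend unavailable on Jetson

print(env["TORCH_CUDA_ARCH_LIST"])
```

This dictionary would then be passed to the build step (e.g. via `subprocess.run([...], env=env)`), or the equivalent `export` lines placed in the shell before running setup.py.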
