PyTorch for Jetson

Thank you @dusty_nv.
The problem with installing PyTorch and torchvision and getting the OSError: libcurand.so.8 error was resolved after making sure the two lines:

deb https://repo.download.nvidia.com/jetson/common r35.2 main

deb https://repo.download.nvidia.com/jetson/t234 r35.2 main

are present in the /etc/apt/sources.list.d/nvidia-l4t-apt-source.list file, reinstalling the JetPack packages with
sudo apt install nvidia-jetpack, and then following the instructions for the wheel packages at the top of this post.
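For anyone hitting the same error, the whole sequence boils down to something like this (a rough sketch; the wheel filename is just the example that appears later in this thread and should match your JetPack and Python version):

# make sure /etc/apt/sources.list.d/nvidia-l4t-apt-source.list contains the two deb lines above
sudo apt update
sudo apt install nvidia-jetpack
# then install the PyTorch wheel from the instructions at the top of this thread, e.g.:
pip3 install numpy torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl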

Is there a way to make it work with Python 3.11?
Thanks

Hi @bujna94, I believe that support for Python 3.11 was added recently with the PyTorch 2.1 release:

You would need to build it from source against Python 3.11.
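In rough terms that means running the whole build with the 3.11 interpreter; a minimal sketch, assuming python3.11 is already installed and following the build options from the top of this thread (the branch and flags may need adjusting):

git clone --recursive --branch v2.1.0 https://github.com/pytorch/pytorch
cd pytorch
# build options typically used on Jetson (see the instructions at the top of this thread)
export USE_NCCL=0
export USE_DISTRIBUTED=0
export TORCH_CUDA_ARCH_LIST="7.2;8.7"
# run the build with the interpreter you want the wheel for
python3.11 -m pip install -r requirements.txt
python3.11 setup.py bdist_wheel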

I see. Yes, on PC it works with 3.11, but not on Jetson (at least with the provided wheels). I have never tried to build it from source, but it seems like that would be the answer for me. Thanks

Hi, do you have torch and torchvision for Python 3.7? I tried installing torch 1.8.0 and torchvision 0.9.0. The installation went well, but when I ran torch.cuda.is_available() it returned False.

@danish.shukor no, you would need to build PyTorch from source against Python 3.7. There are compilation instructions at the top of this thread.

@dusty_nv I’m using JetPack 4.6 and Ubuntu 18.04 on Jetson Xavier NX. Do I have to apply the patch “PyTorch 1.8 - pytorch-1.8-jetpack-4.4.1.patch”? If yes, how do I apply the patch? Thank you.

@danish.shukor the patches depend on the version of PyTorch you are building, and less so on the version of JetPack. I go through and apply them by hand to the PyTorch source tree that I cloned, to avoid any conflicts.

@dusty_nv I'm trying to build PyTorch 1.8. What does the Apply Patch instruction mean? Do I have to do anything here after running $ git clone --recursive --branch v1.8.0 http://github.com/pytorch/pytorch and $ cd pytorch, or can I just skip to Set Build Options?

@danish.shukor IIRC I made those patches with git diff, so you can try applying them with git apply. However, I typically go through and apply them to the PyTorch source by hand (in a text editor) to double-check that nothing has changed. That 1.8 patch is longer, though, so you can try applying it with git first.
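For reference, the git route looks roughly like this (a sketch; run it from the cloned pytorch directory with the patch file mentioned above):

cd pytorch
# dry-run first to see whether the patch still applies cleanly
git apply --check pytorch-1.8-jetpack-4.4.1.patch
# apply it for real (or fall back to editing the files by hand if it reports conflicts)
git apply pytorch-1.8-jetpack-4.4.1.patch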

I see. I’ve done it. Thank you so much for your time! May you have good days ahead!

Hello,
When I ran the command python3 setup.py install --user, I got the following output; it has been going for about an hour and is still looping now:

Building wheel torchvision-0.8.1
PNG found: True
libpng version: 1.6.34
Building torchvision with PNG image support
libpng include path: /usr/include/libpng16
Running build on conda-build: False
Running build on conda: True
JPEG found: True
Building torchvision with JPEG image support
FFmpeg found: True
ffmpeg include path: /usr/include
ffmpeg library_dir: /usr/lib
running install
running bdist_egg
running egg_info
writing torchvision.egg-info/PKG-INFO
writing dependency_links to torchvision.egg-info/dependency_links.txt
writing requirements to torchvision.egg-info/requires.txt
writing top-level names to torchvision.egg-info/top_level.txt
/home/lab/mambaforge/envs/yolo/lib/python3.6/site-packages/torch/utils/cpp_extension.py:339: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
reading manifest file 'torchvision.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '__pycache__' found under directory '*'
warning: no previously-included files matching '*.py[co]' found under directory '*'
adding license file 'LICENSE'
writing manifest file 'torchvision.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-aarch64/egg
running install_lib
running build_py
copying torchvision/version.py -> build/lib.linux-aarch64-3.6/torchvision
running build_ext
building 'torchvision.image' extension
gcc -pthread -B /home/lab/mambaforge/envs/yolo/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DPNG_FOUND=1 -DJPEG_FOUND=1 -I/home/lab/torchvision/torchvision/csrc -I/usr/include/libpng16 -I/home/lab/torchvision/torchvision/csrc -I/home/lab/mambaforge/envs/yolo/lib/python3.6/site-packages/torch/include -I/home/lab/mambaforge/envs/yolo/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/lab/mambaforge/envs/yolo/lib/python3.6/site-packages/torch/include/TH -I/home/lab/mambaforge/envs/yolo/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/lab/torchvision/torchvision/csrc/cpu/image -I/home/lab/mambaforge/envs/yolo/lib/python3.6/site-packages/torch/include -I/home/lab/mambaforge/envs/yolo/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/lab/mambaforge/envs/yolo/lib/python3.6/site-packages/torch/include/TH -I/home/lab/mambaforge/envs/yolo/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.2/include -I/home/lab/mambaforge/envs/yolo/include/python3.6m -c /home/lab/torchvision/torchvision/csrc/cpu/image/read_write_file_cpu.cpp -o build/temp.linux-aarch64-3.6/home/lab/torchvision/torchvision/csrc/cpu/image/read_write_file_cpu.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=image -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
In file included from /home/lab/mambaforge/envs/yolo/lib/python3.6/site-packages/torch/include/ATen/Parallel.h:149:0,
from /home/lab/mambaforge/envs/yolo/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /home/lab/mambaforge/envs/yolo/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /home/lab/mambaforge/envs/yolo/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /home/lab/mambaforge/envs/yolo/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12,
from /home/lab/mambaforge/envs/yolo/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
from /home/lab/torchvision/torchvision/csrc/cpu/image/read_write_file_cpu.h:5,
from /home/lab/torchvision/torchvision/csrc/cpu/image/read_write_file_cpu.cpp:1:
/home/lab/mambaforge/envs/yolo/lib/python3.6/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)

:~/Code/ultralytics/venv-test$ pip3 install numpy torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
ERROR: torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl is not a supported wheel on this platform.
How do I solve this?
I'm using an Orin dev kit board with CUDA 12.1.

@f.mainstone what does pip3 --version show? That PyTorch wheel is for Python 3.8 and was built against CUDA 11.4.
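As a quick check (a sketch; pip debug is marked as experimental by pip, but it prints the wheel tags it will accept):

# the Python version reported here has to match the cp38 tag in the wheel filename
python3 --version
pip3 --version
# list the tags pip considers compatible on this interpreter/platform
pip3 debug --verbose | grep cp3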

Hey dusty, I've been going round and round in circles with this one, and I think I realize now that what I'm trying to do is actually not possible. I maybe stupidly assumed that if there is a CUDA 11.8 / CUDA 12.x, then there would be things like TensorRT to match it on the Jetson platforms. However, I'm realizing now that this is not the case, and there are actually only very specific supported combinations of CUDA, PyTorch, torchvision, TensorRT, etc.

I'm trying to use YOLOv8, and I need CUDA 11.8 to work with PyTorch, as 11.4 isn't supported?

I'm no expert in all of this, so it's a lot of following guides online, and most of them are written for x86; see my post:

I'm crossing my fingers that JetPack 6 solves everything. If not, we'll have to implement another solution in our business, which would be a shame, as I think the Jetson setup would otherwise be perfect; the compatibility is just a nightmare.

I've got everything working with:
CUDA 11.8
PyTorch 2.0.0+nv23.05
torchvision 0.15.1
ultralytics 8.0.218

I just cannot get TensorRT to join the party :(

Thanks for your help and support regardless :)

@f.mainstone yes, JetPack 6 should get you newer versions of CUDA/cuDNN/TensorRT/etc. I know that other folks have gotten YOLOv8 running on Jetson Nano and JetPack 4, so I'm not sure whether CUDA 11.8 is a strict requirement for it or not. Sometimes packages say that but can still run with older Python/etc. Or you could just recompile PyTorch against CUDA 11.8. And does TensorRT actually throw an error, or are you just going by the compatibility matrix? Regardless, yeah, JetPack 6 should be out soon.
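If you do try the rebuild route, pointing the build at a different toolkit is roughly this (a sketch; /usr/local/cuda-11.8 is a hypothetical install path, not something JetPack 5 ships):

# make the PyTorch source build pick up CUDA 11.8 instead of the default toolkit
export CUDA_HOME=/usr/local/cuda-11.8
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
# then follow the build-from-source steps at the top of this thread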


Hi, is there any specific version of torchvision compatible with PyTorch 2.1.0?
I installed JetPack 5.1.2, and it seems only PyTorch v2.1.0 is available.

I am getting this error after installing PyTorch 1.11 on JetPack 5 and running the verification steps:

import torch
torch.cuda.is_available()

torch._C._cuda_init()
RuntimeError: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero.

Hi @iris980125, for PyTorch 2.1, I am using torchvision 0.16.
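If you end up building it yourself, the steps are roughly as follows (a sketch based on the source-build instructions at the top of this thread; the v0.16.1 tag is just an example):

# build torchvision from source against the PyTorch you already have installed
git clone --branch v0.16.1 https://github.com/pytorch/vision torchvision
cd torchvision
export BUILD_VERSION=0.16.1   # torchvision's setup.py reads this for the version string
python3 setup.py install --user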


@csoham96 are you able to build/run the CUDA deviceQuery sample to confirm the GPU is working?

cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery