PyTorch for Jetson

You may also try using the Docker containers with PyTorch preinstalled from NVIDIA L4T PyTorch | NVIDIA NGC, so that there is no need to build it from the GitHub sources.
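For example, a container with PyTorch preinstalled can be pulled like this (the tag below is only an illustration; pick the tag on NGC that matches your JetPack/L4T version):

$ sudo docker pull nvcr.io/nvidia/l4t-pytorch:r32.4.3-pth1.6-py3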
@neuezeal you may also try these steps

wget https://nvidia.box.com/shared/static/9eptse6jyly1ggt9axbja2yrmj6pbarc.whl
mv 9eptse6jyly1ggt9axbja2yrmj6pbarc.whl torch-1.6.0-cp36-cp36m-linux_aarch64.whl
sudo apt-get install python3-pip libopenblas-base libopenmpi-dev
pip3 install Cython
pip3 install numpy torch-1.6.0-cp36-cp36m-linux_aarch64.whl
sudo apt-get install libjpeg-dev zlib1g-dev
git clone --branch v0.7.0 https://github.com/pytorch/vision torchvision
cd torchvision
export BUILD_VERSION=0.7.0
sudo python3 setup.py install
cd ../
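To confirm that the wheel and torchvision installed correctly, a quick sanity check (not part of the steps above; the output will depend on your setup) is:

$ python3 -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"
$ python3 -c "import torchvision; print(torchvision.__version__)"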

Hello,

(env_name1) jetson8@jetson8-desktop:~/coding$ pip3 install numpy torch-1.5.0-cp36-cp36m-linux_aarch64.whl
Requirement already satisfied: numpy in /home/jetson8/.local/lib/python3.6/site-packages (1.19.2)
Processing ./torch-1.5.0-cp36-cp36m-linux_aarch64.whl
Installing collected packages: torch
Attempting uninstall: torch
Found existing installation: torch 1.4.0
Uninstalling torch-1.4.0:
Successfully uninstalled torch-1.4.0
Successfully installed torch-1.4.0
(env_name1) jetson8@jetson8-desktop:~/coding$ cd torchvision/
(env_name1) jetson8@jetson8-desktop:~/coding/torchvision$ ls
cmake CODE_OF_CONDUCT.md examples LICENSE packaging references setup.py torchvision travis-scripts
CMakeLists.txt docs hubconf.py MANIFEST.in README.rst setup.cfg test tox.ini
(env_name1) jetson8@jetson8-desktop:~/coding/torchvision$ sudo python3 setup.py install
Traceback (most recent call last):
File "setup.py", line 13, in <module>
import torch
ModuleNotFoundError: No module named 'torch'

What am I supposed to do?
Thank you.

@neuezeal
At some point you need to move out of the build folder before importing torch: cd ../
Also, you may try repeating the steps from the post above exactly, from the first step to the last one.
What output do you get if you do?
Is there any particular reason why you are installing torch 1.4.0 rather than 1.6.0, as proposed?
I would also suggest uninstalling any existing torch package before installing it again with pip3, for example with the commands below.
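A minimal cleanup-and-reinstall sequence, assuming the torch 1.6.0 wheel from the steps above (adjust the wheel name if you use a different version):

$ cd ~   # leave the torchvision source tree first
$ pip3 uninstall torch
$ sudo pip3 uninstall torch
$ pip3 install numpy torch-1.6.0-cp36-cp36m-linux_aarch64.whl
$ python3 -c "import torch; print(torch.__version__)"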


Hello,

I hope to run this project; it requires torch version 1.5.0.

Thank you.

I have not tried this project; maybe other folks will chime in.
However, I noticed that you were installing torch 1.4, not 1.5.


Ah, it is the instructions which are confusing!

The installation instructions are for v1.4.0, and while I did substitute 1.6.0 in the filename, I did not change the URL. The instructions say "Substitute the URL/filenames", which to me sounded like "URL or filenames", whereas you meant URL and filename!

Sorry, that was my mistake. But since the post title is about v1.6.0, perhaps the installation instructions should include the 1.6.0 download/installation commands by default!

Ah, sorry about that. I have just updated the instructions to make them clearer.

I used the instructions for PyTorch 1.3 with JetPack 4.4.

But I can't load PyTorch; I receive this error message:
…
from torch._C import *
ImportError: cannot open shared object file: No such file or directory.

Has anyone else gotten this message? Can you give me any tips?
Thanks

For the JetPack 4.4 production release, only PyTorch 1.6 is supported.


thanks

Thanks again.
I tried to use PyTorch 1.6, but I got a different error:
OSError: libcurand.so.10: cannot open shared object file: No such file or directory

To verify that CUDA 10 is on the path, I ran this in the terminal:
echo $LD_LIBRARY_PATH
/usr/local/cuda-10.0/lib64:

Can you help me?

Hi @abraham.pelz, can you check your L4T version with cat /etc/nv_tegra_release and post the results of that command here? It seems you may have installed a PyTorch wheel that wasn't built for your version of JetPack-L4T.

Before installing another PyTorch wheel, I also recommend running these commands to clear out the previous install:

$ pip3 uninstall torch
$ sudo pip3 uninstall torch

Also, if you continue to have issues installing PyTorch, I recommend using the l4t-pytorch container, which already has it installed. You will want to use the tag of the container that matches your L4T version, as in the example below.
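A minimal sketch of running the container (the tag is only an example; substitute the one matching your L4T release):

$ sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-pytorch:r32.4.3-pth1.6-py3
# then, inside the container:
# python3 -c "import torch; print(torch.__version__)"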

Could you paste here the output of the steps below, please?

sudo apt install mlocate
sudo updatedb
locate libcurand.so

If you are running from inside a Docker container, the entire cuda folder might need to be copied into the container.
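If locate reports libcurand.so.10 in a directory that is not on LD_LIBRARY_PATH, one thing to try is extending that variable. This is only a sketch; the path below is a hypothetical illustration, so use whatever directory locate actually prints:

$ export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH   # example path only
$ python3 -c "import torch; print(torch.__version__)"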

Thanks,
The command:
cat /etc/nv_tegra_release
Output:

R32 (release), REVISION: 3.1, GCID: 18186506, BOARD: t186ref, EABI: aarch64, DATE: Tue Dec 10 07:03:07 UTC 2019

It is proposed that the JetPack 4.4 GA release be used with torch 1.6; earlier JetPack releases support earlier torch releases, as far as I can tell.

OK, so you are on JetPack 4.3 (R32.3.1). For that version, you can try these wheels:

JetPack 4.2 / 4.3

thanks

Hi, thanks a lot for your reply! I'm using a TX2 with JetPack 4.4, L4T 32.4.2. I am actually downloading PyTorch 1.4, which is for 32.4.2, rather than PyTorch 1.6. To be specific, I used the commands from the instruction guide, which I attached below:
wget https://nvidia.box.com/shared/static/1v2cc4ro6zvsbu0p8h6qcuaqco1qcsif.whl -O torch-1.4.0-cp27-cp27mu-linux_aarch64.whl
sudo apt-get install libopenblas-base libopenmpi-dev
pip install future torch-1.4.0-cp27-cp27mu-linux_aarch64.whl


However, I'm still getting the error shown in the screenshot. Is there anything wrong with the commands I used?
Thanks a lot!

Hi, I am building PyTorch 1.6 from source in a virtual Python 3.8 environment, but at around 86% of the build I receive the following error:

CMake Error at torch_cuda_generated_Unique.cu.o.Release.cmake:281 (message):
Error generating file
/opt/pytorch/build/caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/./torch_cuda_generated_Unique.cu.o

caffe2/CMakeFiles/torch_cuda.dir/build.make:2520: recipe for target 'caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_Unique.cu.o' failed
make[2]: *** [caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_Unique.cu.o] Error 1

CMake is version 3.10.2

Does anyone have a clue or a pointer for me? Thanks in advance!

Hi @jiangwei.wang, the error with the nvidia-l4t-bootloader package that you are encountering is unrelated to PyTorch. Are you using the TX2 devkit, or a TX2 on a different carrier board? See this thread for more info:

https://forums.developer.nvidia.com/t/ota-update-to-jetpack-4-4-dp-fails-error-processing-package-nvidia-l4t-bootloader/126075