Hi @AK51, see the very first post in this topic; it has instructions for installing PyTorch and torchvision. Or you can use the l4t-pytorch container, which already has these pre-installed.
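For reference, pulling and starting that container usually looks something like this (the tag below is just an example; use the one that matches your JetPack-L4T version):
sudo docker pull nvcr.io/nvidia/l4t-pytorch:r32.6.1-pth1.9-py3
# --runtime nvidia is what exposes the GPU inside the container
sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-pytorch:r32.6.1-pth1.9-py3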
@dusty_nv thanks so much for providing the links. I think building from source is the only option I have left. I will go explore this route.
Hi, I don't know if Python 3.8 works? The official instructions point to Python 3.6.
Hi, I'm trying to install PyTorch on a Jetson TX2-NX following the installation instructions described at the top of this thread. I can install it, but when I import it in Python 3, I get the following error:
Python 3.6.9 (default, Dec 8 2021, 21:08:43)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/tx2-nx/.local/lib/python3.6/site-packages/torch/__init__.py", line 195, in <module>
    _load_global_deps()
  File "/home/tx2-nx/.local/lib/python3.6/site-packages/torch/__init__.py", line 148, in _load_global_deps
    ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
  File "/usr/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libcudnn.so.8: cannot open shared object file: No such file or directory
I tried different versions but I always reach the same point, and I cannot find a solution to this problem online.
Any help on how to successfully install PyTorch and torchvision on the TX2-NX would be much appreciated.
Hi @NoobKing, I only build the wheels for Python 3.6, since that is the default version of Python that comes with the version of Ubuntu in JetPack. To build PyTorch for Python 3.8, you can follow the Build from Source
instructions at the top of this thread. There's also a thread about it here:
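Roughly, the build looks like the sketch below. This is illustrative only; the branch, patch, and flag values come from the Build from Source instructions and may differ for the PyTorch version you want:
git clone --recursive --branch v1.10.0 https://github.com/pytorch/pytorch
cd pytorch
# apply the patch for your PyTorch version here (see the instructions)
export USE_NCCL=0 USE_DISTRIBUTED=0 USE_QNNPACK=0 USE_PYTORCH_QNNPACK=0
export TORCH_CUDA_ARCH_LIST="5.3;6.2;7.2"
# build with the Python interpreter you want the wheel for (3.8 in this case)
python3.8 -m pip install -r requirements.txt
python3.8 setup.py bdist_wheel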
Hi @user137664, which version of JetPack-L4T do you have on your Jetson and which PyTorch wheel did you install? (you can check the L4T version with cat /etc/nv_tegra_release)
Also, are you sure that SDK Manager successfully installed cuDNN when you flashed your TX2-NX device? For cuDNN 8 it should look similar to:
nvidia@xavier-32g:/media/nvidia/NVME$ ls -ll /usr/lib/aarch64-linux-gnu/libcudnn*
lrwxrwxrwx 1 root root 39 Feb 22 2021 /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so -> /etc/alternatives/libcudnn_adv_infer_so
lrwxrwxrwx 1 root root 27 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8 -> libcudnn_adv_infer.so.8.0.0
-rw-r--r-- 1 root root 98960352 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.0.0
lrwxrwxrwx 1 root root 42 Feb 22 2021 /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer_static.a -> /etc/alternatives/libcudnn_adv_infer_stlib
-rw-r--r-- 1 root root 102563474 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer_static_v8.a
lrwxrwxrwx 1 root root 39 Feb 22 2021 /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so -> /etc/alternatives/libcudnn_adv_train_so
lrwxrwxrwx 1 root root 27 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8 -> libcudnn_adv_train.so.8.0.0
-rw-r--r-- 1 root root 52212160 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.0.0
lrwxrwxrwx 1 root root 42 Feb 22 2021 /usr/lib/aarch64-linux-gnu/libcudnn_adv_train_static.a -> /etc/alternatives/libcudnn_adv_train_stlib
-rw-r--r-- 1 root root 56776978 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_adv_train_static_v8.a
lrwxrwxrwx 1 root root 39 Feb 22 2021 /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so -> /etc/alternatives/libcudnn_cnn_infer_so
lrwxrwxrwx 1 root root 27 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8 -> libcudnn_cnn_infer.so.8.0.0
-rw-r--r-- 1 root root 476410272 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.0.0
lrwxrwxrwx 1 root root 42 Feb 22 2021 /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer_static.a -> /etc/alternatives/libcudnn_cnn_infer_stlib
-rw-r--r-- 1 root root 405719302 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer_static_v8.a
lrwxrwxrwx 1 root root 39 Feb 22 2021 /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so -> /etc/alternatives/libcudnn_cnn_train_so
lrwxrwxrwx 1 root root 27 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8 -> libcudnn_cnn_train.so.8.0.0
-rw-r--r-- 1 root root 39971944 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.0.0
lrwxrwxrwx 1 root root 42 Feb 22 2021 /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train_static.a -> /etc/alternatives/libcudnn_cnn_train_stlib
-rw-r--r-- 1 root root 33150644 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train_static_v8.a
lrwxrwxrwx 1 root root 39 Feb 22 2021 /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so -> /etc/alternatives/libcudnn_ops_infer_so
lrwxrwxrwx 1 root root 27 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8 -> libcudnn_ops_infer.so.8.0.0
-rw-r--r-- 1 root root 108543032 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.0.0
lrwxrwxrwx 1 root root 42 Feb 22 2021 /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer_static.a -> /etc/alternatives/libcudnn_ops_infer_stlib
-rw-r--r-- 1 root root 44072996 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer_static_v8.a
lrwxrwxrwx 1 root root 39 Feb 22 2021 /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so -> /etc/alternatives/libcudnn_ops_train_so
lrwxrwxrwx 1 root root 27 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8 -> libcudnn_ops_train.so.8.0.0
-rw-r--r-- 1 root root 27284344 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.0.0
lrwxrwxrwx 1 root root 42 Feb 22 2021 /usr/lib/aarch64-linux-gnu/libcudnn_ops_train_static.a -> /etc/alternatives/libcudnn_ops_train_stlib
-rw-r--r-- 1 root root 27497822 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_ops_train_static_v8.a
lrwxrwxrwx 1 root root 29 Feb 22 2021 /usr/lib/aarch64-linux-gnu/libcudnn.so -> /etc/alternatives/libcudnn_so
lrwxrwxrwx 1 root root 17 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn.so.8 -> libcudnn.so.8.0.0
-rw-r--r-- 1 root root 182760 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn.so.8.0.0
lrwxrwxrwx 1 root root 32 Feb 22 2021 /usr/lib/aarch64-linux-gnu/libcudnn_static.a -> /etc/alternatives/libcudnn_stlib
-rw-r--r-- 1 root root 915503964 May 23 2020 /usr/lib/aarch64-linux-gnu/libcudnn_static_v8.a
Hi @dusty_nv , thank you for your quick response.
The JetPack-L4T version I have is R32.5.1. The PyTorch wheel I installed is 1.8.0.
I would say the SDK Manager didn't install cuDNN when flashing the TX2-NX. CUDA 10.2 was installed without the SDK using:
sudo wget https://repo.download.nvidia.com/jetson/common/pool/main/c/cuda-toolkit-10-2/cuda-toolkit-10-2_10.2.460-1_arm64.deb
sudo apt install cuda-toolkit-10-2
I tried to install cuDNN later, and the file it cannot find appears to be located at /usr/local/cuda/lib64, but it still isn't recognized.
ls -ll /usr/lib/aarch64-linux-gnu/libcudnn* doesn't show anything.
Thank you for your help.
OK gotcha - yep, it looks like you also need to install cuDNN (either via SDK Manager or apt/deb). The PyTorch wheel that you installed for L4T R32.5 should be fine once you have the prerequisite libraries installed.
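If the NVIDIA apt repository is already configured on the device (it normally is after flashing with L4T), cuDNN can be installed without SDK Manager. The package names below are the usual ones for cuDNN 8; double-check them with apt-cache search cudnn first:
sudo apt update
# pulls in the full set of JetPack components (CUDA, cuDNN, TensorRT, ...)
sudo apt install nvidia-jetpack
# or, for just the cuDNN runtime and headers:
sudo apt install libcudnn8 libcudnn8-dev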
Python 3.8.0 (default, Dec 9 2021, 17:53:27)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torch'
I followed the instructions on how to build PyTorch from source, but it does not work. After the last command (python3 setup.py bdist_wheel), do we have to do anything else?
Thank you @dusty_nv for your answer. Just to clarify, is it a must to install CUDA and cuDNN from SDK Manager, or can they be installed after the flashing process? In that case, what would be the correct installation to get a compatible version?
Thank you again!
You should then install the wheel that it builds. (I believe the wheel gets built to the pytorch/dist directory)
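For example (the exact wheel filename depends on the PyTorch and Python versions you built with):
cd pytorch/dist
# install with the same Python interpreter you built the wheel for
python3 -m pip install torch-*.whl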
With TX2, I've only installed CUDA with SDK Manager or with apt-get install nvidia-jetpack. I haven't manually selected the deb like you did, so I'm unsure exactly which package to pick (normally SDK Manager or apt takes care of that). Using SDK Manager will help select the version of cuDNN that is compatible with the version of JetPack you are using.
@dusty_nv hi, I followed the instructions and successfully installed torch 1.8. However, when I enter the Python interactive environment and type import torch, it just shows "illegal instruction (core dumped)". And when I tried to build PyTorch from source and ran python3 setup.py bdist_wheel, it also showed "illegal instruction (core dumped)". Do you know what may cause this issue? Thank you!
Hi @haonanwa, are you able to import numpy? If not, please export OPENBLAS_CORETYPE=ARMV8 in your terminal first. See this post for more info:
https://github.com/numpy/numpy/issues/18131#issuecomment-755438271
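If that fixes it, you can make the workaround persistent so every new terminal picks it up, e.g.:
# append the workaround to your shell profile (assumes bash)
echo 'export OPENBLAS_CORETYPE=ARMV8' >> ~/.bashrc
source ~/.bashrc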
Thanks. It works for me!
I have JetPack 4.6 on my Jetson Nano. I would like to build PyTorch from source, but there are no suitable patch files available for JetPack 4.6. Kindly help me sort this out. @dusty_nv
Hi @user54392, the patches aren't really specific to the JetPack version, but rather the version of PyTorch. So if you are building PyTorch 1.10, for example, just try using pytorch-1.10-jetpack-4.5.1.patch.
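Applying the patch from inside the PyTorch checkout would look something like this (adjust the path to wherever you downloaded it):
cd pytorch
# apply the diff to the source tree before building
git apply /path/to/pytorch-1.10-jetpack-4.5.1.patch
# or: patch -p1 < /path/to/pytorch-1.10-jetpack-4.5.1.patch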
I ran into trouble installing PyTorch v1.8.0.
When I import torch, it raises OSError: libcudnn.so.8: cannot open shared object file: No such file or directory.
But I have installed cuDNN and ran chmod +x on it. I tried sudo ldconfig, but it didn't work. How can I solve this problem?
Thanks!
Hi,
I am using the nvcr.io/nvidia/l4t-pytorch:r32.6.1-pth1.9-py3 container. I ran a program that needs GPU support inside that container, but it executed only on the CPU. Kindly let me know whether that container has GPU access, or whether I need to make any changes to enable GPU support. @dusty_nv
Hi @user14194, which version of JetPack are you running and how did you install cuDNN? Was it with SDK Manager or an SD card image? It seems like it is looking for a different version.
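In the meantime, you can check what the dynamic linker actually sees with something like:
# is any libcudnn registered with the linker?
ldconfig -p | grep libcudnn
# where is the library actually installed, if anywhere?
find /usr -name "libcudnn.so.8*" 2>/dev/null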