PyTorch for Jetson

Hi @Victorine, I’m not sure what your baidu link points to, but you can download a wheel from the first post in this topic and install it with the instructions there. You should pick a wheel that supports the version of JetPack you are running.

I can’t use the link from the first post because I do not have access to nvidia.box.com.
And my problem is that I do not know the install command.

Thank you for your response.

Ah ok, gotcha - try running this after you download the wheel:

sudo apt-get install python3-pip libopenblas-base libopenmpi-dev 
pip3 install Cython
pip3 install numpy
pip3 install <PATH-TO-TORCH-WHEEL>.whl
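
After the install, a quick sanity check is to import torch and confirm CUDA is visible (this assumes you installed one of the CUDA-enabled wheels from the first post):

python3 -c "import torch; print(torch.__version__); print('CUDA available:', torch.cuda.is_available())"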

This is totally awesome!

Thank you, I am going to try this today!

Thank you very much, it’s working!

Hi! My Jetson Nano has JetPack 4.5.1 [L4T 32.5.1] with CUDA 10.2. How should I install the correct PyTorch, and which version should I install so that PyTorch can use CUDA? Thank you.

Hi @karl.self.shen, you can use any of these wheels on JetPack 4.5.1:

You can also use the l4t-pytorch container from nvcr.io/nvidia/l4t-pytorch:r32.5.0-pth1.7-py3
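
For example, you can start that container with something like this (the --runtime nvidia flag is what exposes the GPU inside the container; add your own volume mounts as needed):

sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-pytorch:r32.5.0-pth1.7-py3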

Hi all, the wheel for PyTorch 1.9 has been posted:

Thanks a lot! If I use the container, how can I use cv2 correctly with a camera? I hit a lot of bugs, especially in Qt, when installing Python packages in this Docker container.

Hi everyone, I would like to use QNNPACK for quantization in PyTorch on the Jetson Nano. Is there any build available with support for QNNPACK? Preferably PyTorch >= 1.7.0.

If you use the latest l4t-ml container, it has OpenCV installed from JetPack (i.e. the OpenCV build that already has GStreamer enabled).
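
If you need a camera inside the container, you would typically pass the video device through when launching it (a USB camera on /dev/video0, for example; CSI cameras need additional setup), and you can confirm GStreamer support from inside the container. A rough sketch, where the l4t-ml tag is an assumption and should match your L4T version:

sudo docker run -it --rm --runtime nvidia --network host --device /dev/video0 nvcr.io/nvidia/l4t-ml:r32.5.0-py3
python3 -c "import cv2; print(cv2.getBuildInformation())" | grep -i gstreamer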

Hi @strongespresso, some time back I was getting build errors on aarch64 for QNNPACK, so I had to disable it. You can try building PyTorch with it enabled to see if it compiles.

Hello, will the PyTorch v1.0.0 wheel file (for Python 3.6) provided above run on JetPack 3.3? Thanks.

I compiled it from source with

USE_NCCL=0
USE_DISTRIBUTED=0
USE_QNNPACK=0
USE_PYTORCH_QNNPACK=1
TORCH_CUDA_ARCH_LIST="5.3;6.2;7.2"

and I can confirm that the compilation succeeded, and QNNPACK is now visible as one of the supported backends for quantization. :-) It’s available here: torch-1.8.0-cp36-cp36m-linux_aarch64.whl. Thanks for the help!
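
For anyone trying a similar build, a quick way to check whether QNNPACK is actually listed as a quantization backend after installing the wheel is:

python3 -c "import torch; print(torch.backends.quantized.supported_engines)"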

Hi @Vebby, I don’t believe so, since JetPack 3.3 uses a different version of CUDA than that wheel was built against (it was built for JetPack 4.2/4.3).

These PyTorch wheels are only portable between JetPack versions that share the same version of CUDA and have a similar version of cuDNN.
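
If you are not sure which CUDA version your JetPack image ships, you can check directly on the device, for example (nvcc may not be on your PATH by default, in which case the version.txt file is the easier check):

nvcc --version
cat /usr/local/cuda/version.txt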

Hello again,
Is there any way you know of to install PyTorch v1.0 on JetPack 3.3.3? Thank you for your response.

I believe you would need to build PyTorch from source on JetPack 3.3.3.

Cannot install torch 1.9.0 on Jetson Nano with JetPack 4.5.1.
I’m running the same sequence of commands recommended here (they worked just fine for torch 1.8.0 and torchvision 0.9.0), but after the command ‘pip3 install numpy torch-1.9.0-cp36-cp36m-linux_aarch64.whl’ I get the message ‘… successfully installed torch 1.8.0’. Please note: 1.8.0, not 1.9.0.
Is it because JetPack 4.5.1 has CUDA 10.2 preinstalled and torch 1.9.0 requires CUDA 11.1?

What is the recommended way to install torch 1.9.0? For example, what is the easiest way to update CUDA on JetPack 4.5.1, and is the release date of an updated JetPack with CUDA 11.1 preinstalled known?
Or am I just missing something?

Thanks in advance!!!

I found the reason myself: I used the wrong link in the wget command.
The correct one for 1.9.0 is: wget https://nvidia.box.com/shared/static/2cqb9jwnncv9iii4u60vgd362x4rnkik.whl -O torch-1.9.0-cp36-cp36m-linux_aarch64.whl

Small tip, just in case it helps someone: you can do a crude check of a wheel’s version by opening it in a file manager (it is just a zip archive) and examining the version listed inside.
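
For example, you can also check from the command line without installing it (the filename below is assumed to match what you downloaded; the version shows up in the dist-info directory name):

unzip -l torch-1.9.0-cp36-cp36m-linux_aarch64.whl | grep dist-info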