PyTorch for Jetson

Hi @alireza.rf, the JetPack 4.4 production release (L4T R32.4.3) supports only PyTorch 1.6.0 and newer, because cuDNN removed some previously-deprecated functions, which required PyTorch to be updated. The same goes for JetPack 4.4.1 (L4T R32.4.4): it needs PyTorch 1.6.0 or newer.

Hello,

I’m using a virtual environment (miniforge) on a Jetson Xavier NX, and I’ve run into the following error.

(yolov5_env) hodu@hodu-desktop:~$ pip3 install numpy torch-1.7.0-cp36-cp36m-linux_aarch64.whl
ERROR: torch-1.7.0-cp36-cp36m-linux_aarch64.whl is not a supported wheel on this platform.

What am I supposed to do?

Thank you.

Hi @forumuser, is the hodu-desktop machine your Jetson, or is it a PC?

If you are indeed logged into your Jetson, can you confirm first that you’re able to install the wheel successfully outside of virtualenv?

Hello,

hodu-desktop is a Jetson

I can install the wheel successfully outside of virtualenv.

Thank you.

Hmm ok - I haven’t used virtualenv myself before (instead I use containers), but I believe others on this thread have been able to. Can you check pip3 --version or python3 --version from within your virtualenv to confirm that it’s indeed a Python 3.6 environment?
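For example, a quick programmatic check from inside the environment; a small sketch (the cp36 requirement comes from the wheel filenames in this thread, not from Jetson itself):

```python
import sys

# The cp36 wheels (torch-*-cp36-cp36m-linux_aarch64.whl) require CPython 3.6:
# the interpreter's major/minor version must match the wheel's python tag.
print(sys.implementation.name)   # 'cpython' for the stock interpreter
print(sys.version_info[:2])      # must be (3, 6) for these wheels

compatible = (sys.implementation.name == "cpython"
              and sys.version_info[:2] == (3, 6))
print("cp36-wheel compatible:", compatible)
```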

Hello,

When I created the virtualenv, I believe I specified Python 3.7.

I’ll check and let you know soon.

Thank you.

OK, thanks. Yeah, it would need to be created as a Python 3.6 environment to work with those wheels.

Hi, I created a Python 3.6 environment with miniforge, and installed torch-1.6.0-cp36-cp36m-linux_aarch64.whl in it. I then ran into a problem installing torchvision v0.7.0 (for PyTorch v1.6):

~/torchvision$ sudo python3 setup.py install
Traceback (most recent call last):
  File "setup.py", line 13, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'

but I already have torch installed and can import it:

~/torchvision$ python
Python 3.6.11 | packaged by conda-forge | (default, Nov 27 2020, 18:40:28)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
1.6.0

How can I solve it? Thanks.

Hi @1754387338, I haven’t used miniforge before, but it would appear that torchvision through python3 isn’t able to find the torch package you installed. If you run python3 are you able to import torch?

If not, you might want to symbolically link python3 to python so that the torchvision install script can find it.
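A short diagnostic you could run with both python and python3 to see whether they resolve to the same interpreter, and whether that interpreter can see torch; a sketch (note that sudo typically resets PATH, so `sudo python3` may run the system interpreter rather than the env’s):

```python
import sys

# Print which interpreter binary is actually running; inside a
# conda/miniforge env this should live under the env's directory,
# not /usr/bin.
print(sys.executable)

# torchvision's setup.py must be run by the same interpreter that has
# torch installed; check whether this one can see it:
try:
    import torch
    print("torch", torch.__version__, "importable")
except ImportError:
    print("torch NOT importable from", sys.executable)
```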

hi,

I tried:
pip3 install torch-1.4.0-cp36-cp36m-linux_aarch64.whl
and
pip3 install torch-1.7.0-cp36-cp36m-linux_aarch64.whl

I downloaded both whl files, but the install fails every time. The error is:
Defaulting to user installation because normal site-packages is not writeable
ERROR: torch-1.6.0-cp36-cp36m-linux_aarch64.whl is not a supported wheel on this platform.

In addition, I am using Python 3.8, because I want to deploy YOLOv5 on a Jetson Nano, and that requires Python 3.8.
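The error above comes from pip’s compatibility-tag check: the wheel filename itself encodes which interpreter it supports. A simplified illustration of that check (real pip uses the packaging library; this parser is only a sketch that handles simple filenames with no build tag):

```python
def wheel_tags(filename):
    """Split a simple wheel filename into (python_tag, abi_tag, platform_tag).

    Simplified sketch: assumes no build tag and no hyphens in the
    project name or version.
    """
    stem = filename[:-len(".whl")]
    _name, _version, python, abi, platform = stem.split("-")
    return python, abi, platform

# cp36 means CPython 3.6 only; a Python 3.8 interpreter advertises cp38
# tags instead, so pip reports "not a supported wheel on this platform".
print(wheel_tags("torch-1.7.0-cp36-cp36m-linux_aarch64.whl"))
# -> ('cp36', 'cp36m', 'linux_aarch64')
```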

Why don’t these containers include OpenCV yet? Seriously, what’s the point of this? Your Docker system is so useless. I’ve spent a week on headaches from missing libraries that should have been included by default, and all these dumb methods to pipe over libraries from my main OS. Does anyone who actually does ML work make these?

Hi @industrialacc0, we try to keep the size down on the l4t-pytorch and l4t-tensorflow containers by not installing extra libraries into those. You can use these base containers as a starting point and create your own containers from them (e.g. via Dockerfiles)

You can see the Dockerfile commands for installing the version of OpenCV that comes with JetPack here, so you needn’t mount it from the host if you don’t want to:

https://github.com/NVIDIA-AI-IOT/jetbot/blob/cbf6f1b5a3285ad3bbb18ec552ed79846d1e2529/docker/base/Dockerfile#L47

You can also change/rebuild these base containers yourself, as the base Dockerfiles and build scripts for the containers are open-sourced here:

https://github.com/dusty-nv/jetson-containers

And yes, I do a lot of ML work in PyTorch, but I haven’t needed OpenCV as a frequent dependency, and it is a large library. I’ll consider adding it to the larger l4t-ml container, though some folks may want their own customized or newer version of OpenCV, and a pre-existing version could complicate that install.

Hi @zlbzailushang, these PyTorch wheels were built for Python 3.6, so they wouldn’t work on Python 3.8. However some other users on this topic and in the forums have been able to rebuild PyTorch for Python 3.8. Please see this post for more info:

thanks a lot.

I needed a VPN to download it with wget.

Thanks. I was very frustrated when I wrote that and didn’t expect a useful solution, so I had already built OpenCV from scratch. Your solution will probably work a lot better. In another topic your colleague recommended changing csv files and such.

On a side note, I would expect most people to use Jetsons as machine-vision inference devices (at least I do). Having pure ML containers without the libraries most commonly used alongside them, such as OpenCV, seems like a bit of a waste, but that’s up to you guys, I guess.

In any case thank you!

@industrialacc0 thanks for your feedback and for following up. In the next version of the l4t-ml container, I have added OpenCV 4.4.1 (the one that comes with JetPack). l4t-ml is the big container with PyTorch/TensorFlow/JupyterLab/scipy/sklearn/etc., so it should give users a good starting point.

@dusty_nv , something seems wrong with the prebuilt PyTorch v1.7.0. To reproduce on AGX Xavier:

Let’s start with the official Docker image:
docker run --rm -it --gpus all nvcr.io/nvidia/l4t-pytorch:r32.4.4-pth1.6-py3

Now, inside the Docker container, we do:
wget https://nvidia.box.com/shared/static/wa34qwrwtk9njtyarwt5nvo6imenfy26.whl -O torch-1.7.0-cp36-cp36m-linux_aarch64.whl

pip3 install torch-1.7.0-cp36-cp36m-linux_aarch64.whl

python3

>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.tensor([12.345])
>>> print(x)
>>> print(F.softmax(x))

The expected results are:
12.345
1.

The actual results are:
12.
nan

The above issue only happens to CPU tensors. If we do x = x.cuda() before printing, then we’ll see the correct results. I suspect something is wrong with the CPU library.

Also, PyTorch v1.6.0 doesn’t have the above problem.
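For reference, here is what a correct softmax should produce on that input; a minimal pure-Python version (no torch required) to sanity-check the expected values:

```python
import math

def softmax(xs):
    # Subtract the max before exponentiating for numerical stability.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# A single logit always normalizes to exactly 1.0, so the tensor([12.3450])
# input above should yield tensor([1.]), never nan:
print(softmax([12.345]))   # [1.0]
```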

Hi @yin, it seems this issue may be related to this PyTorch 1.7.0 issue: https://github.com/pytorch/pytorch/issues/49157

Not sure if it has been addressed in 1.7.1 or not. I don’t think it is particular to how I built the wheel. For now, you may want to stick with 1.6.0 or try building the wheel for 1.7.1 to see if that fixes it (although I would expect that to be indicated in the PyTorch issue above at some point)

Thanks for the prompt reply @dusty_nv.

I built v1.7.1 from source following your instructions, and encountered the same issue.

This issue is specific to Jetson. It doesn’t happen on the desktop version of PyTorch downloaded from PyTorch’s conda channel.

David