PyTorch for Jetson

Hi haranbolt, can you check from an interactive Python terminal whether you are able to import numpy and numpy.core.multiarray? If so, it may be related to using virtualenv.

I also tried installing on the base installation of python3. But I still get the same error.

Are you able to manually import numpy and numpy.core.multiarray from an interactive Python shell? If not, you may need to reinstall numpy.
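For reference, a quick check along those lines from the same interpreter might look like this (just a sketch; the version/path printout is only there to confirm which numpy installation is being picked up):

import numpy
import numpy.core.multiarray  # the submodule mentioned above
print(numpy.__version__, numpy.__file__)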

Hi,

I installed it for python3 and in general it behaves well. However, when trying to get CornerNet to work (https://github.com/princeton-vl/CornerNet-Lite.git) I get:

RuntimeError: cuda runtime error (7) : too many resources requested for launch at /media/nvidia/WD_BLUE_2.5_1TB/pytorch/aten/src/THCUNN/generic/SpatialUpSamplingBilinear.cu:67

This is probably the result of the kernel launch requesting too many threads per block; see https://github.com/pytorch/pytorch/issues/7680 (the thread count needs to be reduced from 1024 to 256).
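For what it's worth, a minimal way to check whether bilinear upsampling on its own still hits that kernel error would be something like this (a sketch only; the tensor shape is a placeholder):

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 64, 64, device='cuda')  # placeholder input
# bilinear upsampling goes through the SpatialUpSamplingBilinear path shown in the error above
y = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
print(y.shape)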

I did it, but it showed an error:

Processing ./torch-1.1.0a0+b457266-cp36-cp36m-linux_aarch64.whl
ERROR: Exception:
Traceback (most recent call last):
File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/cli/base_command.py", line 178, in main
status = self.run(options, args)
File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/commands/install.py", line 352, in run
resolver.resolve(requirement_set)
File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/resolve.py", line 131, in resolve
self._resolve_one(requirement_set, req)
File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/resolve.py", line 294, in _resolve_one
abstract_dist = self._get_abstract_dist_for(req_to_install)
File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/resolve.py", line 242, in _get_abstract_dist_for
self.require_hashes
File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/operations/prepare.py", line 353, in prepare_linked_requirement
progress_bar=self.progress_bar
File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/download.py", line 873, in unpack_url
unpack_file_url(link, location, download_dir, hashes=hashes)
File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/download.py", line 807, in unpack_file_url
unpack_file(from_path, location, content_type, link)
File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/utils/misc.py", line 628, in unpack_file
flatten=not filename.endswith('.whl')
File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/utils/misc.py", line 505, in unzip_file
zip = zipfile.ZipFile(zipfp, allowZip64=True)
File "/usr/lib/python3.6/zipfile.py", line 1131, in __init__
self._RealGetContents()
File "/usr/lib/python3.6/zipfile.py", line 1198, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file

What can I do?

Hi 1572710121,
It seems your wheel file is broken; please check the integrity of the downloaded wheel against this checksum:
MD5 (torch-1.1.0a0+b457266-cp36-cp36m-linux_aarch64.whl) = a08c545c05651e6a9651010c13f3151f
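For reference, one way to compute the checksum locally and compare it against the value above (a quick sketch in Python; running md5sum on the file from a shell works just as well):

import hashlib

with open('torch-1.1.0a0+b457266-cp36-cp36m-linux_aarch64.whl', 'rb') as f:
    print(hashlib.md5(f.read()).hexdigest())  # should match the MD5 listed above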

Hi dusty,
Reinstalling numpy fixed the problem. Thank you!

However, while trying to install torchvision via pip, I ran into the following problems

setuptools.sandbox.UnpickleableException: RequiredDependencyException('\n\nThe headers or library files could not be found for jpeg,\na required dependency when compiling Pillow from source.\n\nPlease see the install instructions at:\n   https://pillow.readthedocs.io/en/latest/installation.html\n\n',)

#EDIT
Running the following fixed the issue:

sudo apt-get install libjpeg-dev zlib1g-dev
pip install Pillow
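A quick sanity check that the rebuilt Pillow actually picked up JPEG support might look like this (a sketch; the feature name follows Pillow's features module):

from PIL import features
print(features.check('jpg'))  # expected to print True once libjpeg-dev is installed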

The Python 3 version of https://nvidia.box.com/shared/static/veo87trfaawj5pfwuqvhl6mzc5b55fbj.whl is not getting downloaded fully. Is anybody else facing this issue?

I am a newbie here. When I run pip install torch-1.1.0a0+b457266-cp27-cp27mu-linux_aarch64.whl, using the two commands offered above in the Python 2.7 section, I get “torch-1.1.0a0+b457266-cp27-cp27mu-linux_aarch64.whl is not a supported wheel on this platform” along with “You are using pip version 18.1, however version 19.1 is available. You should consider upgrading via the pip install --upgrade pip command.” What should I do?

Hi benchicc, if it says not a supported wheel on this platform, that most often means you are trying to install a Python 2.7 wheel on Python 3, or vice versa (or you are trying to install the aarch64 wheel on x86, or vice versa). Are you sure you are installing it against the right Python version?
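If in doubt, the interpreter you are installing into can report this directly; the cp27/cp36 and aarch64 parts of the wheel filename need to match its output (a quick sketch):

import sys, platform
print(sys.version)         # 2.7.x for the cp27 wheel, 3.6.x for the cp36 wheel
print(platform.machine())  # should report aarch64 on the Jetson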

@dusty_nv,
I can’t access “https://nvidia.box.com/shared/static/m6vy0c7rs8t1alrt9dqf7yt1z587d1jk.whl”. When I run “wget https://nvidia.box.com/shared/static/m6vy0c7rs8t1alrt9dqf7yt1z587d1jk.whl -O torch-1.1.0a0+b457266-cp27-cp27mu-linux_aarch64.whl”, I get the error: Connecting to nvidia.box.com (nvidia.box.com)|146.112.61.106|:443… failed: Connection refused. Are the binaries still there? Looking forward to your reply.

Hi Xiao, yes it is still up. If your access is being blocked, please see this post:

[url]https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/post/5331080/#5331080[/url]

Suggestion – can NVIDIA please put all of this stuff into the Debian repository for all versions and all JetPacks? It would be awesome if we could just apt-get install tensorflow, apt-get install caffe2, apt-get install pytorch, apt-get install ros, apt-get install mxnet, etc.

It kind of bugs me that we can autonomously drive a car but we can’t autonomously install software and have to do so many steps. apt-get completely does Level 4 autonomy for software installation, including dependency management. It would be really awesome if we could make use of it so that nobody needs to come to the forums to search “how to install X on Jetson”. I do appreciate all the help on the forums, but a strong suggestion here: making it painless and frictionless to adopt a Jetson system without having to dig through forums would be awesome for your sales as well.


Hi wuxiekeji, for NVIDIA sw we are doing this, with packages like the CUDA toolkit, cuDNN, TensorRT, etc. provided as DEB packages. We are working on OTA updates to the NVIDIA packages through apt.

The third-party Python packages are built as pip wheels, not DEBs. ROS is actually installed from apt as a DEB; you just have to add ROS’s apt repository (following the normal install procedure from ROS.org).

What we are doing is adding these third-party libraries as “Add-On Packages” to JetPack that you can optionally select to install from an app-store-like interface. We hope this will make them easier for you to find. If they were all installed by default, the SD card image would be a lot bigger, probably over the 16GB recommended minimum SD card capacity.

Thank you, it would be amazing to have those options in JetPack and to have NVIDIA maintain updated deb packages in a repo. In general, anything that minimizes needing to dig through forums and Googling or bookmarking forum threads to install stuff would be awesome.

Now I have already installed torch-1.1.0a0+b457266-cp27-cp27mu-linux_aarch64.whl. However, Linux told me that I need to install builtins. When I was installing builtins, I could not install future. Can someone help me?


On this link: https://devtalk.nvidia.com/default/topic/1049071/pytorch-for-jetson-nano/
with these modifications:
WAS: $ sudo ~/jetson_clocks.sh
NOW: $ sudo /usr/bin/jetson_clocks
WAS: $ sudo pip3 install -U setuptools
NOW: $ sudo -H pip3 install -U setuptools
WAS: $ sudo pip3 install -r requirements.txt
NOW: $ sudo -H pip3 install -r requirements.txt

at: $ python setup.py bdist_wheel
I received the following error:
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "setup.py", line 728, in <module>
build_deps()
File "setup.py", line 294, in build_deps
build_dir='build')
File "/home/gerardg/pytorch/tools/build_pytorch_libs.py", line 293, in build_caffe2
check_call(ninja_cmd, cwd=build_dir, env=my_env)
File "/usr/lib/python3.6/subprocess.py", line 291, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['ninja', 'install', '-v']' returned non-zero exit status 1.

On this link: https://devtalk.nvidia.com/default/topic/1042196/jetson-tx2/jetson-tx2-pytorch-install-problem/post/5334375/#5334375
the moderator AastaLLL recommended:
If JetPack4.2 is acceptable, you can install the package shared in this comment directly:
https://devtalk.nvidia.com/default/topic/1049071/pytorch-for-jetson-nano/

On this link: Jetson Download Center | NVIDIA Developer
JetPack 4.2 for […] Jetson Nano is available […] For the Jetson Nano Developer Kit.
Simply download this SD card image and follow the steps at Getting Started with Jetson Nano Developer Kit.
JetPack4.2 and L4T R32.1 were announced 03/19/2019
I downloaded the SD card image on 05/04/2019
so I believe I am on JetPack 4.2

But AastaLLL’s recommended link is the link I was following when I received the error.

$ sudo -H pip3 install -r requirements.txt
shows Requirement already satisfied for the following:
future, numpy, pyyaml, setuptools, six, typing
before I run
$ python3 setup.py bdist_wheel

Comment on 05/16/2019 after another attempt:
There are 90 lines of errors, each about 2,700 characters long.
Each line has caffe2 in it. After line 85 of the output is this line:
FAILED: caffe2/CMakeFiles/caffe2.dir/contrib/aten/aten_op.cc.o
I can send the errors if you want them.
I am currently looking at:

However, I am on 18.04.

Thank you and Peace.

Hi Dusty!

I installed PyTorch following your instructions and everything went fine.
But when I tried to import torch in Python, there is a version mismatch:

$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
1.1.0a0+b457266

Checking the PyTorch build in /Downloads/pytorch/torch/version.py, the version is 1.1.0a0+e79610c. So I tried to set the PYTHONPATH variable:

$ export PYTHONPATH="${PYTHONPATH}:/home/robin/Downloads/pytorch/"

Trying to import torch again gives the following error:

$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/robin/Downloads/pytorch/torch/__init__.py", line 79, in <module>
    from torch._C import *
ModuleNotFoundError: No module named 'torch._C'

(I am quite new to Linux and building with CMake.)
Any suggestions on what to do?

Try pip3 uninstall torch and sudo pip3 uninstall torch until there is nothing left to uninstall.
Then install the PyTorch wheel. What you describe is usually caused by having multiple versions installed.
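Once it is reinstalled, a quick way to confirm which copy of torch is actually being imported (a sketch; run it from outside the pytorch source directory so the source tree doesn't shadow the installed package):

import torch
print(torch.__version__)  # e.g. 1.1.0a0+b457266
print(torch.__file__)     # should point into site-packages, not the source checkout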

robinniwood, I think that slightly different version is just because I built the Python3 pip wheel at a later date than the Python2 wheel; you should be able to ignore it.

I also have another set of builds (which should be identical versions within the set, I think) linked from the PyTorch GitHub here:

pytorch/README.md at master · pytorch/pytorch · GitHub