PyTorch for Jetson - version 1.7.0 now available

In fact, I'm using a VPN from Los Angeles, with access to Google, Twitter, etc., but it still somehow failed to connect.
It would be very kind of you to download the wheel for Python 2 and upload it to Baidu NetDriver.
Big thanks.

Hi, after I installed the PyTorch wheel for Python 3, this is what I got. (I am using a Nano, and just flashed it.)

~$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/torch/", line 79, in <module>
    from torch._C import *
ImportError: /usr/local/lib/python3.6/dist-packages/torch/lib/ undefined symbol: _Py_ZeroStruct

Could you tell me if there are other packages I need to install?

Thank you


For those behind the GFW who can't download the wheel directly, please use the following Baidu NetDriver links:

PyTorch for Python 2.7
Code: j2eq
MD5 (torch-1.1.0a0+b457266-cp27-cp27mu-linux_aarch64.whl) = 6515e05c0eb1437ca9fa187391df7505

PyTorch for Python 3.6
Code : ribc
MD5 (torch-1.1.0a0+b457266-cp36-cp36m-linux_aarch64.whl) = a08c545c05651e6a9651010c13f3151f
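To guard against truncated downloads, the posted MD5 sums can be checked with a short sketch like this (`file_md5` is a hypothetical helper name; the commented filename is the Python 3.6 wheel from this thread):

```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Compute the MD5 of a file in 1 MB chunks (the wheels are large)."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the checksum posted above, e.g.:
#   file_md5("torch-1.1.0a0+b457266-cp36-cp36m-linux_aarch64.whl")
# should match a08c545c05651e6a9651010c13f3151f
```

If the digest differs, re-download the wheel before attempting `pip install`.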

Happy hacking :)

As advised, I reflashed the Xavier with JetPack 4.2, but it still is not working.
I created a new virtual environment using virtualenv and installed the pre-built wheel for Python 3 specified in the top post.

>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/nvidia/python3/lib/python3.6/site-packages/torch/", line 97, in <module>
    from torch._C import *
ImportError: numpy.core.multiarray failed to import

Hi haranbolt, can you check from an interactive Python terminal if you are able to import numpy and import numpy.core.multiarray? If so, it may be related to using virtualenv.

I also tried installing on the base installation of python3. But I still get the same error.

Are you able to manually import numpy and numpy.core.multiarray from an interactive Python shell? If not, you may need to reinstall numpy.
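A quick way to run that check without typing into the REPL is a small sketch like this (`can_import` is a hypothetical helper name; only the standard library is assumed):

```python
import importlib

def can_import(name):
    """Return (True, module) on success, (False, error message) on failure."""
    try:
        return True, importlib.import_module(name)
    except ImportError as e:
        return False, str(e)

# Check the two imports the torch traceback depends on:
for mod in ("numpy", "numpy.core.multiarray"):
    ok, detail = can_import(mod)
    print(mod, "OK" if ok else "FAILED: %s" % detail)
```

If either line prints FAILED, reinstalling numpy (inside the same environment you install torch into) is the usual fix.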


I installed it for Python 3 and in general it behaves well. However, when trying to get CornerNet to work, I get:

RuntimeError: cuda runtime error (7) : too many resources requested for launch at /media/nvidia/WD_BLUE_2.5_1TB/pytorch/aten/src/THCUNN/generic/

This is probably the result of PyTorch trying to launch too many threads at startup; the thread count needs to be reduced to 256 from 1024.

I did it, but it showed an error:

Processing ./torch-1.1.0a0+b457266-cp36-cp36m-linux_aarch64.whl
ERROR: Exception:
Traceback (most recent call last):
  File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/cli/", line 178, in main
    status =, args)
  File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/commands/", line 352, in run
  File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/", line 131, in resolve
    self._resolve_one(requirement_set, req)
  File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/", line 294, in _resolve_one
    abstract_dist = self._get_abstract_dist_for(req_to_install)
  File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/", line 242, in _get_abstract_dist_for
  File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/operations/", line 353, in prepare_linked_requirement
  File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/", line 873, in unpack_url
    unpack_file_url(link, location, download_dir, hashes=hashes)
  File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/", line 807, in unpack_file_url
    unpack_file(from_path, location, content_type, link)
  File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/utils/", line 628, in unpack_file
    flatten=not filename.endswith('.whl')
  File "/home/nvidia/.local/lib/python3.6/site-packages/pip/_internal/utils/", line 505, in unzip_file
    zip = zipfile.ZipFile(zipfp, allowZip64=True)
  File "/usr/lib/python3.6/", line 1131, in __init__
  File "/usr/lib/python3.6/", line 1198, in _RealGetContents
    raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file

What can I do?

Hi 1572710121,
It seems your wheel file is corrupted. Please check the integrity of the downloaded wheel against its MD5:
MD5 (torch-1.1.0a0+b457266-cp36-cp36m-linux_aarch64.whl) = a08c545c05651e6a9651010c13f3151f
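Since a .whl file is just a zip archive, a truncated or HTML-error-page download can also be spotted locally with the standard zipfile module (`looks_like_wheel` is a hypothetical helper name; the commented filename is the wheel from this thread):

```python
import zipfile

def looks_like_wheel(path):
    """A .whl is a zip archive; a truncated or HTML-error download fails this."""
    return zipfile.is_zipfile(path)

# e.g. looks_like_wheel("torch-1.1.0a0+b457266-cp36-cp36m-linux_aarch64.whl")
# returns False for the broken download that raises BadZipFile above
```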

Hi dusty,
Reinstalling numpy fixed the problem. Thank you!

However, while trying to install torchvision via pip, I ran into the following problems

setuptools.sandbox.UnpickleableException: RequiredDependencyException('\n\nThe headers or library files could not be found for jpeg,\na required dependency when compiling Pillow from source.\n\nPlease see the install instructions at:\n\n\n',)

Running the following fixed the issues.

sudo apt-get install libjpeg-dev zlib1g-dev
pip install Pillow

The Python 3 wheel is not getting downloaded fully for me. Is anybody else facing this issue?

I am a newbie here. When I run pip install torch-1.1.0a0+b457266-cp27-cp27mu-linux_aarch64.whl, using the two commands offered above in the Python 2.7 section, I get "torch-1.1.0a0+b457266-cp27-cp27mu-linux_aarch64.whl is not a supported wheel on this platform" and "You are using pip version 18.1, however version 19.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command." What should I do?

Hi benchicc, if it says not a supported wheel on this platform, that most often means you are trying to install a Python 2.7 wheel on Python 3, or vice versa (or you are trying to install the aarch64 wheel on x86 or vice versa). Are you sure you are installing it against the right Python version?
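One way to confirm which wheel your interpreter expects, sketched with only the standard library:

```python
import platform
import sys

# A cp27 wheel needs Python 2.7, a cp36 wheel needs Python 3.6,
# and a linux_aarch64 wheel needs an ARM64 system such as the Jetson.
print("interpreter tag: cp%d%d" % (sys.version_info[0], sys.version_info[1]))
print("machine:         %s" % platform.machine())  # "aarch64" on Jetson
```

If the printed tag and machine do not match the wheel's filename, pip will refuse it with exactly the "not a supported wheel" message above.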

I can't access the download link. When I input "wget -O torch-1.1.0a0+b457266-cp27-cp27mu-linux_aarch64.whl", I got the error: Connecting to (||:443... failed: Connection refused. Are the binaries still there? Looking forward to your reply.

Hi Xiao, yes it is still up, if your access is being blocked, please see this post:

Suggestion: can NVIDIA please put all of this stuff into the Debian repository for all versions and all JetPack releases? It would be awesome if we could just apt-get install tensorflow, apt-get install caffe2, apt-get install pytorch, apt-get install ros, apt-get install mxnet, etc.

It kind of bugs me that we can autonomously drive a car, but we can't autonomously install software and have to do so many steps. apt-get completely delivers Level 4 autonomy for software installation, including dependency management. It would be really awesome if we could make use of it, so that nobody needs to come to the forums to search "how to install X on Jetson". I do appreciate all the help on the forums, but making it painless and frictionless to adopt a Jetson system without digging through forums would be great for your sales as well.


Hi wuxiekeji, for NVIDIA software we are doing this, with packages like the CUDA toolkit, cuDNN, TensorRT, etc. provided as DEB packages. We are working on OTA updates to the NVIDIA packages through apt.

The third-party Python packages are built as pip wheels, not DEBs. ROS is actually installed from apt as a DEB; you just have to add ROS's apt repository (following ROS's normal install procedure).

What we are doing is adding these third-party libraries as “Add-On Packages” to JetPack that you can optionally select to install from an app-store-like interface. We hope this will make them easier for you to find. If they were all installed by default, the SD card image would be a lot bigger, probably over the 16GB recommended minimum SD card capacity.

Thank you, it would be amazing to have those options in JetPack and to have NVIDIA maintain updated DEB packages in a repo. In general, anything that minimizes digging through forums, Googling, or bookmarking forum threads to install stuff would be awesome.