Hi @max_maier, my guess is that if you run the CUDA deviceQuery sample, it will show 0 devices. How did you get CUDA 11.2 on your Xavier NX? It is not a supported version of CUDA in JetPack. If you downloaded the Arm SBSA packages online, those won’t work on Jetson, because those drivers are for discrete GPUs (over PCIe), whereas Jetson uses an integrated GPU. There are also underlying dependencies in JetPack / L4T, so you would need to re-flash or re-install the original version of CUDA that came with your JetPack.
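For reference, here is roughly how you could run that check (the sample path below assumes the default JetPack layout under /usr/local/cuda and may differ on your system):
# Build and run the deviceQuery sample that ships with the CUDA toolkit
cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery    # a broken install typically reports 0 CUDA-capable devices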
Hi @leebin, it appears that your .whl file may be corrupted, and you may want to try downloading it again.
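If you want to confirm that before re-downloading, a .whl is just a zip archive, so something like the following should flag corruption (the filename here is only an example -- use the one you actually downloaded):
unzip -t torch-1.7.0-cp36-cp36m-linux_aarch64.whl
# "No errors detected" means the file is intact; anything else means the
# download is corrupted and should be fetched again.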
Hi,
I tried to install PyTorch v1.7.0 following the instructions above, but I’m getting an error when importing torch:
import torch
Illegal instruction (core dumped)
(I’m using Python 3.6 and CUDA 10.2.)
Could you please help with the issue?
Thanks.
Hi @ani.karapetyan, please see this post - https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-8-0-now-available/72048/746?u=dusty_nv
Try running export OPENBLAS_CORETYPE=ARMV8 beforehand, or downgrade numpy to 1.18.2.
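If it helps, the workaround can be applied like this (adding the export to ~/.bashrc is just a suggestion to make it persist across shells):
# Work around the OpenBLAS "Illegal instruction (core dumped)" on aarch64
export OPENBLAS_CORETYPE=ARMV8
echo 'export OPENBLAS_CORETYPE=ARMV8' >> ~/.bashrc

# ...or downgrade numpy instead
pip3 install numpy==1.18.2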
Running export OPENBLAS_CORETYPE=ARMV8 solved my problem.
Thanks a lot!
I’m trying to get this working with a Jupyter notebook, but I am running into an issue. When I run import torch, the notebook spits out:
OSError: /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
I tried export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1, but it didn’t help. PyTorch itself seems to be working, though, because from the console I get:
import torch
torch.cuda.is_available()
True
I previously had torch working in the Jupyter notebook, but only on the CPU, and since installing the version above that has stopped working as well. Any help would be appreciated. Thanks!
Hi @bretg57, please refer to this thread: https://forums.developer.nvidia.com/t/oserror-usr-lib-aarch64-linux-gnu-libgomp-so-1-only-in-jupyter-notebook/174881
Hello Dusty,
many thanks for your quick response.
I’ve installed CUDA 11.2 following these instructions: Install CUDA 11 on Jetson Nano and Xavier NX - Latest Open Tech From Seeed
Hmm… it seemed to work.
Thanks! I was able to figure it out based on the solution there. The answer given there is not very clear, though, so I will post it here in case anyone else has the same problem:
If you are running Jupyter notebook through a systemd service like I was, then you need to add:
[Service]
Environment="LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1"
to the systemd service entry located at /etc/systemd/system/nameofjupyterservice.service, then do a daemon-reload and systemctl restart nameofjupyterservice.
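For completeness, those last two steps are just the standard systemd commands (using the same placeholder service name as above):
sudo systemctl daemon-reload
sudo systemctl restart nameofjupyterservice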
Thanks a million for this.
Hi again @dusty_nv. I seem to be trapped in a cycle of impossible versions and can’t get it to work on the Jetson TX2. Any chance that there will be pip wheels for older versions of PyTorch, such as 0.4.1?
Hi @ian.blake, the earliest pre-built pip wheels were for PyTorch v1.0, and I don’t plan to go back and build older versions, sorry about that.
From what I recall, PyTorch v0.4 and newer have a mostly backwards-compatible API. Have you tried running your code on a newer PyTorch, or updating it?
I was able to install a later version (1.3) and had to update a few deprecated lines of code, but I was able to make it work. Thanks for the help!
OK great, glad to hear it. I think you are much better off now having your code updated, since then you can use a more recent PyTorch and aren’t stuck on an old version forever.
Across dozens of pages, people just shout EXPORT, but no one bothered to say exactly what to do. You did, and this solved my issue. Thank you.
Hello, can I use Python 3.8 to install PyTorch?
Hi @czq99, you would need to build the PyTorch wheel from source for Python 3.8. There are some folks on this topic who have done it for Python 3.7 and 3.8. Here is also a post about it:
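Very roughly, the build looks like the sketch below. This is only an outline under assumptions (the version tag is just an example, Python 3.8 is assumed to be installed already, and the Jetson-specific environment variables and patches from the build-from-source instructions at the top of this thread are omitted):
# Assumes python3.8 and python3.8-dev are already installed on the system
git clone --recursive --branch v1.8.0 https://github.com/pytorch/pytorch
cd pytorch
# Set the Jetson-specific build environment variables and apply any patches
# described in the build instructions at the top of this thread before this step
python3.8 -m pip install -r requirements.txt
python3.8 setup.py bdist_wheel    # the finished wheel lands in dist/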
Hello @dusty_nv
do you have the wheel for pytorch1.8.1?
I don’t typically build the minor versions; I will wait for PyTorch 1.9.
I’m having the same problem; I can run g++, and build-essential is the newest version (12.4ubuntu1).