Hi,
I am trying to run PyTorch on my Jetson Xavier NX. I have CUDA 10.2 and have installed PyTorch 1.8 built for CUDA 10.2, but PyTorch does not detect CUDA when I run the following command:
print('CUDA available: ' + str(torch.cuda.is_available()))
It would be great if anyone could help me work out why it's not working.
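A slightly fuller diagnostic along these lines (a sketch) can help tell apart three different failure modes: torch not installed at all, a CPU-only wheel, and a CUDA runtime that the installed wheel cannot see:

```python
# Hedged sketch: distinguish the failure modes behind is_available() == False.
def cuda_diagnostic():
    """Report whether torch is installed, CUDA-built, and CUDA-visible."""
    try:
        import torch
    except ImportError:
        return {'installed': False}
    return {
        'installed': True,
        'torch_version': torch.__version__,
        'built_with_cuda': torch.version.cuda,      # None => CPU-only wheel
        'cuda_available': torch.cuda.is_available(),
    }

print(cuda_diagnostic())
```

If `built_with_cuda` is `None`, the wheel itself was built without CUDA support, regardless of what is installed on the system.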
C
Hi,
How did you install the PyTorch package?
Could you try the package from the topic below, or the NGC l4t-pytorch container?
Below are pre-built PyTorch pip wheel installers for Jetson Nano, TX1/TX2, Xavier, and Orin with JetPack 4.2 and newer.
Download one of the PyTorch binaries from below for your version of JetPack, and see the installation instructions to run on your Jetson. These pip wheels are built for ARM aarch64 architecture, so run these commands on your Jetson (not on a host PC). You can also use the pre-built l4t-pytorch and l4t-ml container images and Dockerfiles .
PyTorch pip wheels
JetPack 5
PyTo…
Thanks.
Hi Thanks for your response,
I just tried your link. Is there any way of doing this without using a container?
When I try to install the local version I get the following error:
ERROR: torch-1.7.0-cp36-cp36m-linux_aarch64.whl is not a supported wheel on this platform.
This was the command I used:
pip3 install torch-1.7.0-cp36-cp36m-linux_aarch64.whl
Thanks, Charlie
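For context, a wheel tagged cp36-cp36m-linux_aarch64 only installs on a CPython 3.6 interpreter running on an aarch64 machine; if either doesn't match, pip reports exactly this "not a supported wheel" error. A quick stdlib check (a sketch) shows what the current interpreter reports:

```python
# Sketch: a cp36-cp36m-linux_aarch64 wheel installs only when these match.
import sys
import platform

print('Python version:', sys.version_info[:2])  # wheel requires (3, 6)
print('Architecture:  ', platform.machine())    # wheel requires 'aarch64'
```

Running the install with a different Python (e.g. `pip` for Python 2, or a 3.7+ interpreter) is a common cause of this mismatch.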
Hmm, did you also install the PyTorch 1.8 wheel from the post that Aasta linked to? Is this the URL you used for the PyTorch 1.7 wheel?
https://nvidia.box.com/shared/static/cs3xn3td6sfgtene6jdvsxlr366m2dhq.whl
What version of JetPack-L4T are you running? You can check your L4T version with cat /etc/nv_tegra_release
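If you prefer to check it from Python, something like this sketch reads the same file (the path exists only on Jetson/L4T systems):

```python
# Sketch: print the L4T release string, if present on this machine.
from pathlib import Path

release_file = Path('/etc/nv_tegra_release')
if release_file.exists():
    # The first line looks like: "# R32 (release), REVISION: 5.1, ..."
    print(release_file.read_text().splitlines()[0])
else:
    print('/etc/nv_tegra_release not found (not an L4T system)')
```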
Is there any difference if you run the deviceQuery sample first, before trying the PyTorch command?
cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery
Does deviceQuery report your GPU? If so, does PyTorch detect the GPU after running deviceQuery?