PyTorch with CUDA 11.8 on Jetson AGX Orin (aarch64)

Hi,

I am trying to build a new PyTorch environment. The project requirements specify CUDA 11.7 or newer, but below 12.x.

How can I build for CUDA 11.8? (As I recall, 11.7 isn't available on the Jetson AGX Orin.) I am having trouble finding the right wheel for either Python 3.8 or 3.9.

I found this: PyTorch v2.1.0

However, I am having issues installing torchvision and torchaudio alongside it (torchvision decides for me that I need a different torch version instead :) and installs it…).

In this case I am not using Docker but a conda environment, so the Torch-L4T container probably won't work here. Is there another wheel that could work?
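One thing I am considering for the torchvision problem is building it from source against whatever torch I end up with, so pip can't swap the torch version out from under me. Roughly like this (the v0.16.0 tag matching torch 2.1.0 is my assumption from the compatibility table, not something I have verified on Jetson yet):

# Assumes torch 2.1.0 is already installed; v0.16.0 is the matching torchvision tag (to be verified)
git clone --branch v0.16.0 https://github.com/pytorch/vision torchvision
cd torchvision
python setup.py install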

I am also happy to build PyTorch from source (as below), but so far I've been running into issues doing that:

export TORCH_CUDA_ARCH_LIST="8.6"
export CUDA_HOME=/usr/local/cuda-11.8   # Set CUDA_HOME to the CUDA 11.8 installation path

USE_CUDA=1 pip install git+https://github.com/pytorch/pytorch.git torchvision torchaudio cmake \
  --global-option="build_ext" \
  --global-option="-I$CUDA_HOME/include" \
  --global-option="-L$CUDA_HOME/lib64" \
  --global-option="-lcudart"
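For completeness, this is the environment setup I'm using to try to make sure the 11.8 toolkit is the one the build actually finds, in case CUDA_HOME alone isn't enough (paths are the default JetPack install locations, adjust if yours differ):

export CUDA_HOME=/usr/local/cuda-11.8
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
nvcc --version   # should report "release 11.8"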

Any idea?

Hi @hg1, you can see the process that I use for building PyTorch wheels in this Dockerfile (there are similar Dockerfiles for building torchvision and torchaudio).

You can basically just follow the ENV variables that I set and the RUN commands that are executed in the shell. There are also build commands for PyTorch and torchvision in PyTorch for Jetson.

Shortly after you begin the PyTorch build, it will print a configuration summary before it actually starts compiling. I recommend checking that summary to make sure the options you want are enabled (like your desired version of CUDA), so you don't wait for the entire build to complete only to realize that something wasn't detected or enabled earlier.
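For example, one way to keep that summary around for inspection is to tee the build output to a log and grep it afterwards (the exact wording of the summary fields can vary between PyTorch versions, so treat the patterns below as a starting point):

# Capture the build output, including the configuration summary, to a log file
python3 setup.py bdist_wheel 2>&1 | tee build.log

# Check what the build actually detected
grep -iE "USE_CUDA|CUDA version|cuDNN" build.log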

Thank you @dusty_nv. I haven't actually used a Dockerfile before (I've just been sticking all the parameters into docker run flags), but I'll take a look. (Maybe I'll ask an AI engine to convert it to a bash script for me :) )
Meanwhile, I was able to build by using

python setup.py install --cmake "-DPYTHON_EXECUTABLE=$(which python) -DCUDA_ARCH_NAME=Manual -DCUDA_ARCH_BIN=8.6 -DCUDA_ARCH_PTX=8.6"

and it finds CUDA 11.8, so I might be 'out of the woods'. Let's see…
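To double-check that the resulting build really picked up 11.8, I'm planning to verify it from Python with the usual torch attributes:

# Should print the torch version, "11.8", and True if the GPU is visible
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"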

OK cool, good luck! :D

If you just want to "run" the commands in the Dockerfile by hand, you can typically just run whatever follows the RUN statements in your shell (those are just bash commands).
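For example, if the Dockerfile contained a line like the one below (purely illustrative, not quoted from the actual file):

# In the Dockerfile:
#   RUN pip3 install scikit-build ninja
# in your shell you would simply run:
pip3 install scikit-build ninja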

ahh, that looks easy :)
