Install a CUDA-compiled version of PyTorch on a Jetson Orin AGX with JetPack 6.2

Hi Team,

Can you please point me to a source from which I can install a CUDA-compiled version of PyTorch on an Orin AGX running JetPack 6.2?

I tried using https://pypi.jetson-ai-lab.dev/jp6/cu126, but it is currently down. Can someone help me with this?

*** Please note that this reply is generated by LLM automatically ***

To install a CUDA-compiled version of PyTorch on a Jetson AGX Orin running JetPack 6.2, you have a few options. Since the index you linked (https://pypi.jetson-ai-lab.dev/jp6/cu126) is currently down, I’ll guide you through alternative methods.

1. Official PyTorch Installation

First, ensure your Jetson AGX Orin is properly set up with Jetpack 6.2. PyTorch provides official support for NVIDIA Jetson devices, including the AGX Orin. You can try installing PyTorch using the official PyTorch installation script or by manually installing the CUDA toolkit and then PyTorch.

Using the Official PyTorch Installation Script

The PyTorch get-started page generates an install command based on your platform and CUDA version. However, the generic wheels it points to are not built against JetPack’s CUDA, so on Jetson devices they typically install without CUDA support.

  1. Update your package list: sudo apt update
  2. Install necessary dependencies: sudo apt install libopenblas-base libopenmpi-dev
  3. Install PyTorch: You can use the PyTorch installation script from their official website. Visit the PyTorch website, select your options (including CUDA version if applicable), and run the provided command.
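The steps above can be sketched as follows. This is a minimal example, not a verified recipe: the index URL is the jp6/cu126 one from the original question (which may be temporarily unreachable), and the pip flag used is `--extra-index-url`.

```shell
# Step 1: update the package list
sudo apt update

# Step 2: install the dependencies mentioned above
sudo apt install -y libopenblas-base libopenmpi-dev

# Step 3: install a Jetson CUDA build of PyTorch from the jp6/cu126 index
# (URL taken from the question; assumes the index is reachable)
pip3 install torch --extra-index-url https://pypi.jetson-ai-lab.dev/jp6/cu126
```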

Manual Installation

If the script doesn’t work, you can manually install the CUDA toolkit and then PyTorch.

  1. Install the CUDA Toolkit: On Jetson devices, the CUDA toolkit matching your JetPack release is normally installed as part of JetPack itself (via SDK Manager or the JetPack apt packages) rather than downloaded separately from the NVIDIA Developer website. You can verify it is present with nvcc --version.

  2. Install PyTorch: After installing the CUDA toolkit, you can install PyTorch using pip. PyTorch provides a get-started page where you can select your setup and get a customized installation command.
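Whichever route you take, it is worth checking afterwards that the wheel you ended up with actually has CUDA support. A small sketch of such a check (the function name is just for illustration, and it degrades gracefully if torch is not installed yet):

```python
import importlib.util

def cuda_torch_status() -> str:
    # Return a short status string instead of raising, so the check is
    # safe to run even before torch has been installed.
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch  # imported lazily so the check works without torch present
    if torch.cuda.is_available():
        return f"CUDA build, device: {torch.cuda.get_device_name(0)}"
    return "CPU-only build (no CUDA support)"

print(cuda_torch_status())
```

On a correctly configured Orin this should report a CUDA build with the Orin’s GPU name; a CPU-only result means the wheel was not compiled against JetPack’s CUDA.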

2. Using NVIDIA’s Jetson PyTorch Wheels

NVIDIA provides PyTorch wheels specifically compiled for Jetson devices, including the AGX Orin, through their Jetson Zoo project. These wheels are optimized for the Jetson’s hardware and can be a reliable way to get PyTorch working with CUDA support.

  1. Visit the Jetson Zoo: Go to the Jetson Zoo page and look for the PyTorch section related to your Jetpack version (6.2 in your case).
  2. Install PyTorch: Follow the instructions provided on the Jetson Zoo page to install the PyTorch wheel. This usually involves using pip to install from a specific wheel file or repository.
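As a rough illustration of how these wheel indexes are laid out: the path encodes the JetPack major version and the CUDA version, following the pattern seen in the index from the question. The helper below is hypothetical (not part of any NVIDIA tool) and the URL scheme is an observed convention, not a guarantee.

```python
def jetson_pip_index(jetpack_major: int, cuda_version: str) -> str:
    # Build an extra-index URL following the jp<major>/cu<digits> pattern
    # used by the community Jetson wheel index mentioned in the question.
    cu = cuda_version.replace(".", "")
    return f"https://pypi.jetson-ai-lab.dev/jp{jetpack_major}/cu{cu}"

# JetPack 6.x ships CUDA 12.6, so:
print(jetson_pip_index(6, "12.6"))  # https://pypi.jetson-ai-lab.dev/jp6/cu126
```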

3. Compile PyTorch from Source

If pre-compiled wheels are not available or do not work for your specific setup, you can try compiling PyTorch from source. This method requires more effort and technical expertise but provides the most flexibility.

  1. Clone the PyTorch Repository: Clone the PyTorch repository from GitHub.
  2. Install Dependencies: Install all the dependencies required to build PyTorch. This includes CUDA, cuDNN, and other libraries.
  3. Configure and Build: Configure the build process to use your CUDA version and then build PyTorch.
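A rough sketch of what those three steps look like when building on the Orin itself. Exact flags vary between PyTorch releases, so treat this as a starting point rather than a recipe:

```shell
# Step 1: clone the repository (with submodules)
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch

# Step 2: install the Python build dependencies
pip3 install -r requirements.txt

# Step 3: configure and build a wheel.
# Orin's GPU is compute capability 8.7; limiting the arch list shortens the build.
export USE_CUDA=1
export TORCH_CUDA_ARCH_LIST="8.7"
export USE_NCCL=0             # NCCL is not used on single-GPU Jetson boards
python3 setup.py bdist_wheel  # the wheel ends up in dist/
```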

Conclusion

Installing a CUDA-compiled version of PyTorch on a Jetson AGX Orin running Jetpack 6.2 can be achieved through the official PyTorch installation methods, using NVIDIA’s Jetson PyTorch wheels, or by compiling PyTorch from source. Each method has its pros and cons, and the best approach depends on your specific requirements and the current state of support and documentation for the Jetson AGX Orin.

*** Please give the thumbs up if you get answers you like. Or provide feedback to help us improve the answer. ***

Hi,

Please use the jp6/cu126 index instead.
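For example (assuming the index is reachable again; flag and URL as in the question above):

```shell
# Install PyTorch from the jp6/cu126 index, then print the version and
# whether CUDA is available
pip3 install torch --extra-index-url https://pypi.jetson-ai-lab.dev/jp6/cu126
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```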

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.