Clear guide to install all the AI model training components

Description

Hi. Can we please have a clear step-by-step guide to installing all the RIGHT AI model training components for the Jetson Orin Nano, including transformers, pipeline, torch, tokenizer, numpy … Currently, the process is like a leaky bucket with tons of holes. Every time I plug one, another one leaks. If I pin torch to the specific recommended release for my JetPack version from the references, then transformers, tokenizer, or numpy breaks, and vice versa. If I downgrade numpy as suggested in one error message, torch breaks … Please help. Thanks!

Environment

TensorRT Version:
GPU Type: Jetson Orin Nano Super
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi @kegintheai,

Welcome to the community! I completely understand your frustration. You are experiencing what we affectionately call “Dependency Hell,” and your “leaky bucket” analogy is spot on.

The root of the issue is that standard pip install commands fetch packages built for desktop architectures (x86_64). Your Jetson Orin Nano uses an ARM64 architecture, and its GPU drivers, CUDA, cuDNN, and TensorRT are tightly integrated into JetPack (NVIDIA’s OS image for Jetson). When you try to upgrade or install a standard Python package, it often overwrites a low-level library with an incompatible version, breaking the whole chain.
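Not part of the original reply, but a quick sanity check you can run on the host to confirm the architecture and JetPack (L4T) release described above, assuming a stock JetPack image where `/etc/nv_tegra_release` exists:

```shell
# Print the CPU architecture -- on a Jetson this should be aarch64,
# which is why desktop (x86_64) pip wheels do not apply.
uname -m

# Show the installed L4T / JetPack release string, if present.
if [ -f /etc/nv_tegra_release ]; then
  cat /etc/nv_tegra_release
else
  echo "No /etc/nv_tegra_release found (not a Jetson, or a non-standard image)"
fi
```

The L4T release printed here is what determines which container tags and PyTorch wheels are compatible with your board.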

The Container Approach

Instead of installing directly on the host OS, we use NVIDIA’s official, pre-compiled environments. Our community relies heavily on the jetson-containers repository maintained by NVIDIA engineers. It automatically pulls Docker containers that have PyTorch, Transformers, TensorRT, and cuDNN perfectly matched to your specific JetPack version.

  1. Clone the repository:
git clone https://github.com/dusty-nv/jetson-containers
cd jetson-containers

  2. Install system requirements (Docker is usually pre-installed on JetPack):
bash install.sh

  3. Run the autotag command: this detects your JetPack version and launches a container with everything pre-installed.
./run.sh $(./autotag transformers)

Note: Once inside this container, torch, transformers, numpy, and tokenizers will all work in harmony, and you won’t have to worry about breaking your system libraries.
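To add to the note above (this check is my addition, not from the original reply): once you have a shell inside the container, you can verify the stack is consistent with a quick import test. The snippet is written defensively so it also runs outside the container, reporting any package that is missing:

```shell
# Report the installed versions of the key packages (or "not installed").
# Inside the jetson-containers transformers image, all three should resolve.
python3 - <<'PY'
for mod in ("torch", "numpy", "transformers"):
    try:
        m = __import__(mod)
        print(f"{mod}: {m.__version__}")
    except ImportError:
        print(f"{mod}: not installed")
PY
```

If all three print a version and `python3 -c "import torch; print(torch.cuda.is_available())"` prints `True`, the container's GPU stack is wired up correctly.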


Please let me know if this works or not.


Hi @athkumar,

Thank you so much for your valuable and informative feedback! I truly appreciate it!
What you’re saying makes perfect sense. I went the hard way and managed to install them all manually after much search and trial and error. It was a good learning experience but I will follow your method for sure moving forward! Cheers!

I would like to add that having AWESOME people like you who take the time to answer and teach is what makes the whole Developer experience amazing and worthwhile. Thank you so much! We truly appreciate it!


Hey @kegintheai, happy to know this!

Thank you so much for the feedback, it really boosts our morale and motivates us to do even better for the community!

Best regards,
Atharva Kumar


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.