How to install TensorFlow 2.17 with GPU support on the Jetson Nano Developer Kit with JetPack 6.1 running Ubuntu 22.04

I tried installing TensorFlow 2.17 because JetPack 6.1 is listed as supporting it in Installing TensorFlow for Jetson Platform - NVIDIA Docs.

After installing it, the GPU is still not detected by TensorFlow.

I then found the forum thread “Could you please let me know how to install tf2.17.0 on Jetson agx orin of jetpack 6.1 without docker container?”

That thread suggested installing 2.18, but that did not work for me either.

These are the logs from my attempt to install 2.18:

pip3 install --extra-index-url https://pypi.jetson-ai-lab.dev/jp6/cu126 tensorflow==2.18.0
Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: Simple index, https://pypi.jetson-ai-lab.dev/jp6/cu126
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0xffff916df7f0>: Failed to establish a new connection: [Errno 113] No route to host')': /jp6/cu126/tensorflow/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0xffff916dfac0>: Failed to establish a new connection: [Errno 113] No route to host')': /jp6/cu126/tensorflow/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0xffff916dfca0>: Failed to establish a new connection: [Errno 113] No route to host')': /jp6/cu126/tensorflow/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0xffff916dfeb0>: Failed to establish a new connection: [Errno 113] No route to host')': /jp6/cu126/tensorflow/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0xffff916dff70>: Failed to establish a new connection: [Errno 113] No route to host')': /jp6/cu126/tensorflow/
Collecting tensorflow==2.18.0
Downloading tensorflow-2.18.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.metadata (4.0 kB)
Requirement already satisfied: absl-py>=1.0.0 in ./.local/lib/python3.10/site-packages (from tensorflow==2.18.0) (2.3.1)
Requirement already satisfied: astunparse>=1.6.0 in ./.local/lib/python3.10/site-packages (from tensorflow==2.18.0) (1.6.3)
Requirement already satisfied: flatbuffers>=24.3.25 in ./.local/lib/python3.10/site-packages (from tensorflow==2.18.0) (25.2.10)
Requirement already satisfied: gast!=0.5.0,!=0.5.1,!=0.5.2,>=0.2.1 in ./.local/lib/python3.10/site-packages (from tensorflow==2.18.0) (0.6.0)
Requirement already satisfied: google-pasta>=0.1.1 in ./.local/lib/python3.10/site-packages (from tensorflow==2.18.0) (0.2.0)
Requirement already satisfied: libclang>=13.0.0 in ./.local/lib/python3.10/site-packages (from tensorflow==2.18.0) (18.1.1)
Requirement already satisfied: opt-einsum>=2.3.2 in ./.local/lib/python3.10/site-packages (from tensorflow==2.18.0) (3.4.0)
Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.18.0) (25.0)
Requirement already satisfied: protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<6.0.0dev,>=3.20.3 in ./.local/lib/python3.10/site-packages (from tensorflow==2.18.0) (4.21.12)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/lib/python3/dist-packages (from tensorflow==2.18.0) (2.25.1)
Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from tensorflow==2.18.0) (59.6.0)
Requirement already satisfied: six>=1.12.0 in /usr/lib/python3/dist-packages (from tensorflow==2.18.0) (1.16.0)
Requirement already satisfied: termcolor>=1.1.0 in ./.local/lib/python3.10/site-packages (from tensorflow==2.18.0) (3.1.0)
Requirement already satisfied: typing-extensions>=3.6.6 in /usr/local/lib/python3.10/dist-packages (from tensorflow==2.18.0) (4.14.1)
Requirement already satisfied: wrapt>=1.11.0 in ./.local/lib/python3.10/site-packages (from tensorflow==2.18.0) (1.14.1)
Requirement already satisfied: grpcio<2.0,>=1.24.3 in ./.local/lib/python3.10/site-packages (from tensorflow==2.18.0) (1.73.1)
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0xffff90f5fee0>: Failed to establish a new connection: [Errno 113] No route to host')': /jp6/cu126/tensorboard/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0xffff90f4c190>: Failed to establish a new connection: [Errno 113] No route to host')': /jp6/cu126/tensorboard/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0xffff90f4c340>: Failed to establish a new connection: [Errno 113] No route to host')': /jp6/cu126/tensorboard/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0xffff90f4c4f0>: Failed to establish a new connection: [Errno 113] No route to host')': /jp6/cu126/tensorboard/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0xffff90f4c6a0>: Failed to establish a new connection: [Errno 113] No route to host')': /jp6/cu126/tensorboard/
Collecting tensorboard<2.19,>=2.18 (from tensorflow==2.18.0)
Downloading tensorboard-2.18.0-py3-none-any.whl.metadata (1.6 kB)
Requirement already satisfied: keras>=3.5.0 in ./.local/lib/python3.10/site-packages (from tensorflow==2.18.0) (3.10.0)
Requirement already satisfied: numpy<2.1.0,>=1.26.0 in ./.local/lib/python3.10/site-packages (from tensorflow==2.18.0) (1.26.4)
Requirement already satisfied: h5py>=3.11.0 in ./.local/lib/python3.10/site-packages (from tensorflow==2.18.0) (3.14.0)
Requirement already satisfied: ml-dtypes<0.5.0,>=0.4.0 in ./.local/lib/python3.10/site-packages (from tensorflow==2.18.0) (0.4.1)
Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in ./.local/lib/python3.10/site-packages (from tensorflow==2.18.0) (0.37.1)
Requirement already satisfied: markdown>=2.6.8 in ./.local/lib/python3.10/site-packages (from tensorboard<2.19,>=2.18->tensorflow==2.18.0) (3.8.2)
Requirement already satisfied: tensorboard-data-server<0.8.0,>=0.7.0 in ./.local/lib/python3.10/site-packages (from tensorboard<2.19,>=2.18->tensorflow==2.18.0) (0.7.2)
Requirement already satisfied: werkzeug>=1.0.1 in ./.local/lib/python3.10/site-packages (from tensorboard<2.19,>=2.18->tensorflow==2.18.0) (3.1.3)
Requirement already satisfied: wheel<1.0,>=0.23.0 in /usr/lib/python3/dist-packages (from astunparse>=1.6.0->tensorflow==2.18.0) (0.37.1)
Requirement already satisfied: rich in ./.local/lib/python3.10/site-packages (from keras>=3.5.0->tensorflow==2.18.0) (14.0.0)
Requirement already satisfied: namex in ./.local/lib/python3.10/site-packages (from keras>=3.5.0->tensorflow==2.18.0) (0.1.0)
Requirement already satisfied: optree in ./.local/lib/python3.10/site-packages (from keras>=3.5.0->tensorflow==2.18.0) (0.16.0)
Requirement already satisfied: MarkupSafe>=2.1.1 in ./.local/lib/python3.10/site-packages (from werkzeug>=1.0.1->tensorboard<2.19,>=2.18->tensorflow==2.18.0) (3.0.2)
Requirement already satisfied: markdown-it-py>=2.2.0 in ./.local/lib/python3.10/site-packages (from rich->keras>=3.5.0->tensorflow==2.18.0) (3.0.0)
Requirement already satisfied: pygments<3.0.0,>=2.13.0 in ./.local/lib/python3.10/site-packages (from rich->keras>=3.5.0->tensorflow==2.18.0) (2.19.2)
Requirement already satisfied: mdurl~=0.1 in ./.local/lib/python3.10/site-packages (from markdown-it-py>=2.2.0->rich->keras>=3.5.0->tensorflow==2.18.0) (0.1.2)
Downloading tensorflow-2.18.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (231.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 231.7/231.7 MB 51.6 MB/s eta 0:00:00
Downloading tensorboard-2.18.0-py3-none-any.whl (5.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.5/5.5 MB 46.7 MB/s eta 0:00:00
Installing collected packages: tensorboard, tensorflow
Attempting uninstall: tensorboard
Found existing installation: tensorboard 2.17.1
Uninstalling tensorboard-2.17.1:
Successfully uninstalled tensorboard-2.17.1
Attempting uninstall: tensorflow
Found existing installation: tensorflow 2.17.0
Uninstalling tensorflow-2.17.0:
Successfully uninstalled tensorflow-2.17.0
Successfully installed tensorboard-2.18.0 tensorflow-2.18.0
vishal@vishal-desktop:~$ python3 -c "import tensorflow as tf; print(tf.__version__, '| CUDA Enabled:', tf.test.is_built_with_cuda(), '| GPU Devices:', tf.config.list_physical_devices('GPU'))"
/usr/lib/python3/dist-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.17.3 and <1.25.0 is required for this version of SciPy (detected version 1.26.4)
warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"
2.18.0 | CUDA Enabled: False | GPU Devices:

Please help

*** Please note that this reply is generated by LLM automatically ***

The issue you’re facing is related to installing TensorFlow 2.17 or 2.18 on your Jetson Nano device with JetPack 6.1, and then not being able to detect the GPU under TensorFlow. Let’s break down the steps and issues you’ve encountered:

  1. Installation Attempts: You’ve tried installing TensorFlow 2.17 and 2.18 using pip, specifying the index URL for the Jetson AI Lab repository. However, both attempts resulted in issues, with the primary problem being the inability to establish a connection to the repository due to a “No route to host” error.

  2. Connection Issues: The error messages indicate a failure to connect to the specified repository (jp6/cu126). This could be due to network connectivity issues, the repository being down, or the URL being incorrect. Ensure your internet connection is stable and try to access the repository URL directly to verify its accessibility (a quick reachability check is sketched after this list).

  3. GPU Detection Issue: After managing to install TensorFlow (though the exact version that was successfully installed isn’t clear from the provided logs), you encountered an issue where TensorFlow does not detect the GPU. The output from your Python test script shows CUDA Enabled: False and no GPU devices are listed.

  4. Potential Solutions:

    • Verify CUDA Installation: Ensure that CUDA is properly installed and configured on your Jetson device. JetPack 6.1 should include CUDA support, but you might need to manually configure environment variables or install additional packages.
    • Check GPU Compatibility: Confirm that your Jetson Nano’s GPU is compatible with the version of TensorFlow you’re trying to install. TensorFlow 2.x supports a wide range of GPUs, but specific versions might have different requirements.
    • Update and Configure: Make sure your system and all packages are up to date. Sometimes, updating the package list and upgrading existing packages can resolve compatibility issues.
    • Environment Variables: Ensure that your environment variables (e.g., PATH, LD_LIBRARY_PATH, CUDA_HOME) are correctly set to point to the CUDA installation. This can often be done by running a setup script provided by the CUDA installation or manually setting these variables in your shell configuration.
    • Alternative Installation Methods: Consider using other installation methods, such as building TensorFlow from source or using a different package manager like apt if available for your specific version of TensorFlow and JetPack.
  5. Comparison and Considerations:

    • Docker vs. Native Installation: While you’ve mentioned not wanting to use Docker, it’s often a straightforward way to get TensorFlow running on Jetson devices, as it encapsulates the environment and dependencies. If you’re open to reconsidering, Docker might provide a simpler path to a working TensorFlow installation.
    • Version Compatibility: Be mindful of the versions of TensorFlow, CUDA, and cuDNN that are compatible with each other and your Jetson device. Using incompatible versions can lead to issues like the ones you’re experiencing.
  6. Conclusion: The primary issues seem to be related to connectivity problems during installation and potential configuration or compatibility problems with CUDA and the GPU. Addressing these areas should help resolve the issues and allow TensorFlow to detect and utilize the GPU on your Jetson Nano. If problems persist, consider seeking help from NVIDIA’s forums or official support channels, as they may provide more specific guidance tailored to your hardware and software configuration.
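
To make points 2 and 4 concrete, here is a minimal, hedged sketch of the reachability and environment checks. The paths are the usual JetPack defaults and are assumptions; adjust them to your setup.

# Is the wheel index reachable at all from this machine?
curl -I https://pypi.jetson-ai-lab.dev/jp6/cu126/

# Is the CUDA toolkit visible? (JetPack normally installs it under /usr/local/cuda)
ls /usr/local/cuda/bin/nvcc && nvcc --version
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

# Does the installed TensorFlow see the GPU?
python3 -c "import tensorflow as tf; print(tf.test.is_built_with_cuda(), tf.config.list_physical_devices('GPU'))"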


Hi,

Please note that the package index link has been changed to the jp6/cu126 index, and we don’t support TensorFlow anymore.
So the package you downloaded is not built with CUDA support.

To install TensorFlow with GPU support, please build it from source.
You can find the steps in the script below:
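
(The referenced script is not reproduced here. As a rough, non-authoritative outline of what a from-source build typically involves on JetPack 6 — the version numbers, compute capability, and Bazel target below are assumptions and differ between releases, so defer to the script:)

# clone the matching TensorFlow release
git clone --branch v2.17.0 --depth 1 https://github.com/tensorflow/tensorflow.git
cd tensorflow

# assumed settings: Python 3.10 on JetPack 6, CUDA enabled
# compute capability 8.7 is Orin; the original Jetson Nano (Maxwell) would be 5.3
export HERMETIC_PYTHON_VERSION="3.10"
export TF_NEED_CUDA=1
export TF_CUDA_COMPUTE_CAPABILITIES="8.7"

./configure                                   # answer the prompts, enabling CUDA
# one common form from recent upstream build docs; check the script for your exact release
bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow
pip3 install bazel-bin/tensorflow/tools/pip_package/wheel_house/tensorflow-*.whl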

Thanks.

When I run the script I get the errors below.

I saw in the README file that this is only supported on Ubuntu 24.
Is that true?

+ echo 'Building Tensorflow 2.17'
Building Tensorflow 2.17
+ wget https://apt.llvm.org/llvm.sh
--2025-07-28 12:01:05--  https://apt.llvm.org/llvm.sh
Resolving apt.llvm.org (apt.llvm.org)... 2a04:4e42:43::561, 199.232.26.49
Connecting to apt.llvm.org (apt.llvm.org)|2a04:4e42:43::561|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7394 (7.2K) [application/octet-stream]
Saving to: ‘llvm.sh.2’

llvm.sh.2           100%[===================>]   7.22K  --.-KB/s    in 0.001s  

2025-07-28 12:01:05 (7.17 MB/s) - ‘llvm.sh.2’ saved [7394/7394]

+ chmod u+x llvm.sh
+ ./llvm.sh 20 all
+ CURRENT_LLVM_STABLE=19
+ BASE_URL=http://apt.llvm.org
+ NEW_DEBIAN_DISTROS=("trixie" "unstable")
+ LLVM_VERSION=19
+ ALL=0
++ lsb_release -is
+ DISTRO=Ubuntu
++ lsb_release -cs
+ VERSION_CODENAME=jammy
++ lsb_release -sr
+ VERSION=22.04
+ UBUNTU_CODENAME=
+ CODENAME_FROM_ARGUMENTS=
+ source /etc/os-release
++ PRETTY_NAME='Ubuntu 22.04.5 LTS'
++ NAME=Ubuntu
++ VERSION_ID=22.04
++ VERSION='22.04.5 LTS (Jammy Jellyfish)'
++ VERSION_CODENAME=jammy
++ ID=ubuntu
++ ID_LIKE=debian
++ HOME_URL=https://www.ubuntu.com/
++ SUPPORT_URL=https://help.ubuntu.com/
++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
++ UBUNTU_CODENAME=jammy
+ DISTRO=ubuntu
+ is_new_debian=0
+ [[ ubuntu == \d\e\b\i\a\n ]]
+ needed_binaries=(lsb_release wget gpg)
+ [[ 0 -eq 0 ]]
+ needed_binaries+=(add-apt-repository)
+ missing_binaries=()
+ using_curl=
+ for binary in "${needed_binaries[@]}"
+ command -v lsb_release
+ for binary in "${needed_binaries[@]}"
+ command -v wget
+ for binary in "${needed_binaries[@]}"
+ command -v gpg
+ for binary in "${needed_binaries[@]}"
+ command -v add-apt-repository
+ [[ 0 -gt 0 ]]
+ case ${DISTRO} in
+ [[ -n jammy ]]
+ CODENAME=jammy
+ [[ -n jammy ]]
+ LINKNAME=-jammy
+ '[' 2 -ge 1 ']'
+ '[' 2 '!=' - ']'
+ '[' 20 '!=' all ']'
+ LLVM_VERSION=20
+ OPTIND=2
+ '[' 2 -ge 2 ']'
+ '[' all == all ']'
+ ALL=1
+ OPTIND=3
+ getopts :hm:n: arg
+ [[ 0 -ne 0 ]]
+ declare -A LLVM_VERSION_PATTERNS
+ LLVM_VERSION_PATTERNS[9]=-9
+ LLVM_VERSION_PATTERNS[10]=-10
+ LLVM_VERSION_PATTERNS[11]=-11
+ LLVM_VERSION_PATTERNS[12]=-12
+ LLVM_VERSION_PATTERNS[13]=-13
+ LLVM_VERSION_PATTERNS[14]=-14
+ LLVM_VERSION_PATTERNS[15]=-15
+ LLVM_VERSION_PATTERNS[16]=-16
+ LLVM_VERSION_PATTERNS[17]=-17
+ LLVM_VERSION_PATTERNS[18]=-18
+ LLVM_VERSION_PATTERNS[19]=-19
+ LLVM_VERSION_PATTERNS[20]=-20
+ LLVM_VERSION_PATTERNS[21]=
+ '[' '!' _ ']'
+ LLVM_VERSION_STRING=-20
+ [[ -n jammy ]]
+ REPO_NAME='deb http://apt.llvm.org/jammy/  llvm-toolchain-jammy-20 main'
+ wget -q --method=HEAD http://apt.llvm.org/jammy
+ [[ ! -f /etc/apt/trusted.gpg.d/apt.llvm.org.asc ]]
++ apt-key list
++ grep -i llvm
+ [[ -z /etc/apt/trusted.gpg.d/apt.llvm.org.asc
uid           [ unknown] Sylvestre Ledru - Debian LLVM packages <sylvestre@debian.org> ]]
+ [[ jammy == \b\o\o\k\w\o\r\m ]]
+ [[ 0 -eq 1 ]]
+ add-apt-repository -y 'deb http://apt.llvm.org/jammy/  llvm-toolchain-jammy-20 main'
Repository: 'deb http://apt.llvm.org/jammy/ llvm-toolchain-jammy-20 main'
Description:
Archive for codename: llvm-toolchain-jammy-20 components: main
More info: http://apt.llvm.org/jammy/
Adding repository.
Found existing deb entry in /etc/apt/sources.list.d/archive_uri-http_apt_llvm_org_jammy_-jammy.list
Adding deb entry to /etc/apt/sources.list.d/archive_uri-http_apt_llvm_org_jammy_-jammy.list
Found existing deb-src entry in /etc/apt/sources.list.d/archive_uri-http_apt_llvm_org_jammy_-jammy.list
Adding disabled deb-src entry to /etc/apt/sources.list.d/archive_uri-http_apt_llvm_org_jammy_-jammy.list
Get:2 https://download.docker.com/linux/ubuntu jammy InRelease [48.8 kB]       
Hit:1 https://apt.llvm.org/jammy llvm-toolchain-jammy-20 InRelease             
Hit:3 https://repo.download.nvidia.com/jetson/common r36.4 InRelease           
Hit:4 http://ports.ubuntu.com/ubuntu-ports jammy InRelease                     
Hit:5 https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu jammy InRelease
Hit:6 https://ppa.launchpadcontent.net/mozillateam/ppa/ubuntu jammy InRelease
Hit:7 http://ports.ubuntu.com/ubuntu-ports jammy-updates InRelease
Ign:8 https://repo.download.nvidia.com/jetson/t210 r36.4 InRelease
Hit:9 http://ports.ubuntu.com/ubuntu-ports jammy-backports InRelease
Hit:10 http://ports.ubuntu.com/ubuntu-ports jammy-security InRelease
Err:11 https://repo.download.nvidia.com/jetson/t210 r36.4 Release
  404  Not Found [IP: 2.19.176.153 443]
Reading package lists... Done
E: The repository 'https://repo.download.nvidia.com/jetson/t210 r36.4 Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
+ apt-get update
Get:2 https://download.docker.com/linux/ubuntu jammy InRelease [48.8 kB]       
Hit:3 https://repo.download.nvidia.com/jetson/common r36.4 InRelease           
Hit:1 https://apt.llvm.org/jammy llvm-toolchain-jammy-20 InRelease             
Hit:4 https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu jammy InRelease   
Ign:5 https://repo.download.nvidia.com/jetson/t210 r36.4 InRelease             
Err:6 https://repo.download.nvidia.com/jetson/t210 r36.4 Release
  404  Not Found [IP: 2.19.176.153 443]
Hit:7 https://ppa.launchpadcontent.net/mozillateam/ppa/ubuntu jammy InRelease
Hit:8 http://ports.ubuntu.com/ubuntu-ports jammy InRelease
Hit:9 http://ports.ubuntu.com/ubuntu-ports jammy-updates InRelease
Hit:10 http://ports.ubuntu.com/ubuntu-ports jammy-backports InRelease
Hit:11 http://ports.ubuntu.com/ubuntu-ports jammy-security InRelease
Reading package lists... Done
E: The repository 'https://repo.download.nvidia.com/jetson/t210 r36.4 Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

Hi,

Err:11 https://repo.download.nvidia.com/jetson/t210 r36.4 Release

t210 (Jetson Nano / TX1) is not supported on the r36 branch (JetPack 6).
Which device do you use?

For the Orin series, /etc/apt/sources.list.d/nvidia-l4t-apt-source.list is expected to contain t234 entries:

deb https://repo.download.nvidia.com/jetson/common r36.4 main
deb https://repo.download.nvidia.com/jetson/t234 r36.4 main
deb https://repo.download.nvidia.com/jetson/ffmpeg r36.4 main
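
A hedged way to inspect and clean that file on the affected board (assuming the standard JetPack layout; back up before editing):

# show the current entries
cat /etc/apt/sources.list.d/nvidia-l4t-apt-source.list

# back up, then drop the stale t210 line so apt stops querying the missing repo
sudo cp /etc/apt/sources.list.d/nvidia-l4t-apt-source.list{,.bak}
sudo sed -i '/jetson\/t210/d' /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
sudo apt-get update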

Thanks.

Hi,
I’m using the NVIDIA Jetson Nano Developer Kit.

I fixed the above error, and now I get this error:

ERROR: An error occurred during the fetch of repository 'python_version_repo':
   Traceback (most recent call last):
	File "/root/.cache/bazel/_bazel_root/fbc06f9baef46cade6e35d9e4137e37c/external/local_xla/third_party/py/python_repo.bzl", line 37, column 13, in _python_repository_impl
		fail("""
Error in fail: 
Could not find requirements_lock.txt file matching specified Python version.
Specified python version: .
Python versions with available requirement_lock.txt files: 3.9, 3.10, 3.11, 3.12, 3.13
Please check python_init_repositories() in your WORKSPACE file.
ERROR: /opt/tensorflow/WORKSPACE:39:25: fetching python_repository rule //external:python_version_repo: Traceback (most recent call last):
	File "/root/.cache/bazel/_bazel_root/fbc06f9baef46cade6e35d9e4137e37c/external/local_xla/third_party/py/python_repo.bzl", line 37, column 13, in _python_repository_impl
		fail("""
Error in fail: 
Could not find requirements_lock.txt file matching specified Python version.
Specified python version: .
Python versions with available requirement_lock.txt files: 3.9, 3.10, 3.11, 3.12, 3.13
Please check python_init_repositories() in your WORKSPACE file.
ERROR: Error computing the main repository mapping: no such package '@python_version_repo//': 
Could not find requirements_lock.txt file matching specified Python version.
Specified python version: .
Python versions with available requirement_lock.txt files: 3.9, 3.10, 3.11, 3.12, 3.13
Please check python_init_repositories() in your WORKSPACE file.
Loading: 

I also tried using the Docker containers, and the https://pypi.jetson-ai-lab.dev/jp6/cu126 link defined there is not working.

I saw another forum thread where they suggested changing to https://pypi.jetson-ai-lab.dev/io, but I do not know how to change it inside the container.
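
For what it's worth, the index pip uses can usually be overridden without editing the image. A hedged sketch, where <image> and <working-index-url> are placeholders for your container image and whichever index is actually reachable:

# per-invocation, via an environment variable the container will see
docker run --rm -it -e PIP_INDEX_URL=<working-index-url> <image> bash

# or persistently, inside the running container (or on the host)
pip3 config set global.index-url <working-index-url>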

Regardless, some say even that link is not working.

What can I do from here? Please help.

This was the forum thread:

I fixed the above error by changing

export HERMETIC_PYTHON_VERSION="${PYTHON_VERSION_MAJOR}.${PYTHON_VERSION_MINOR}"

to

export HERMETIC_PYTHON_VERSION="3.10"

and now this error comes up, even though I have Bazel 6.5:

ERROR: An error occurred during the fetch of repository 'standalone_cuda_redist_json':
   Traceback (most recent call last):
	File "/root/.cache/bazel/_bazel_root/fbc06f9baef46cade6e35d9e4137e37c/external/rules_ml_toolchain/third_party/gpus/nvidia_common_rules.bzl", line 574, column 41, in _redist_json_impl
		redist_version = _get_redist_version(
	File "/root/.cache/bazel/_bazel_root/fbc06f9baef46cade6e35d9e4137e37c/external/rules_ml_toolchain/third_party/gpus/nvidia_common_rules.bzl", line 568, column 37, in _get_redist_version
		redist_version = get_env_var(repository_ctx, redist_version_env_var)
	File "/root/.cache/bazel/_bazel_root/fbc06f9baef46cade6e35d9e4137e37c/external/rules_ml_toolchain/third_party/gpus/nvidia_common_rules.bzl", line 51, column 26, in get_env_var
		return repository_ctx.getenv(name)
Error: 'repository_ctx' value has no field or method 'getenv'
ERROR: /opt/tensorflow/WORKSPACE:104:26: fetching _redist_json rule //external:standalone_cuda_redist_json: Traceback (most recent call last):
	File "/root/.cache/bazel/_bazel_root/fbc06f9baef46cade6e35d9e4137e37c/external/rules_ml_toolchain/third_party/gpus/nvidia_common_rules.bzl", line 574, column 41, in _redist_json_impl
		redist_version = _get_redist_version(
	File "/root/.cache/bazel/_bazel_root/fbc06f9baef46cade6e35d9e4137e37c/external/rules_ml_toolchain/third_party/gpus/nvidia_common_rules.bzl", line 568, column 37, in _get_redist_version
		redist_version = get_env_var(repository_ctx, redist_version_env_var)
	File "/root/.cache/bazel/_bazel_root/fbc06f9baef46cade6e35d9e4137e37c/external/rules_ml_toolchain/third_party/gpus/nvidia_common_rules.bzl", line 51, column 26, in get_env_var
		return repository_ctx.getenv(name)
Error: 'repository_ctx' value has no field or method 'getenv'
ERROR: Error computing the main repository mapping: no such package '@standalone_cuda_redist_json//': 'repository_ctx' value has no field or method 'getenv'
Loading: 
vishal@vishal-desktop:~/Desktop/jetson-containers/packages/ml/tensorflow$ bazel --version
bazel 6.5.0
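
For context, repository_ctx.getenv appears to have been added in Bazel releases newer than 6.5 (Bazel 7.x), which would explain this failure even though 6.5 is installed. A hedged workaround is to drive the build with Bazelisk so the version pinned by the TensorFlow checkout is used:

# install Bazelisk (arm64 build) and let it fetch the pinned Bazel version
wget https://github.com/bazelbuild/bazelisk/releases/latest/download/bazelisk-linux-arm64 -O bazelisk
chmod +x bazelisk
sudo mv bazelisk /usr/local/bin/bazel
cat /opt/tensorflow/.bazelversion   # the version this TensorFlow checkout expects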

This is the error I was talking about when I run jetson-containers build tensorflow.

I’m trying to change the URL in the Dockerfile and install.sh, and it’s still not working.

Unpacking python3.10-dev (3.10.12-1~22.04.10) ...
Setting up libexpat1-dev:arm64 (2.4.7-1ubuntu0.6) ...
Setting up zlib1g-dev:arm64 (1:1.2.11.dfsg-2ubuntu9.2) ...
Setting up libpython3.10-dev:arm64 (3.10.12-1~22.04.10) ...
Setting up python3.10-dev (3.10.12-1~22.04.10) ...
+ which python3.10
/usr/bin/python3.10
+ return_code=0
+ set -e
+ '[' 0 '!=' 0 ']'
++ lsb_release -rs
+ distro=22.04
+ '[' 22.04 = 24.04 ']'
+ '[' 22.04 = 20.04 ']'
+ '[' 22.04 = 18.04 ']'
+ python3.10
+ curl -sS https://bootstrap.pypa.io/get-pip.py
Using pip 25.1.1 from /tmp/python/tmpcxj86n3c/pip.zip/pip (python 3.10)
Looking in indexes: https://pypi.jetson-ai-lab.dev/jp6/cu126, https://pypi.jetson-ai-lab.dev/jp6/cu126
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0xffff7dff90c0>: Failed to establish a new connection: [Errno 113] No route to host')': /jp6/cu126/pip/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0xffff7dffba90>: Failed to establish a new connection: [Errno 113] No route to host')': /jp6/cu126/pip/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0xffff7dffbf10>: Failed to establish a new connection: [Errno 113] No route to host')': /jp6/cu126/pip/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0xffff7dffb520>: Failed to establish a new connection: [Errno 113] No route to host')': /jp6/cu126/pip/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0xffff7dffb6d0>: Failed to establish a new connection: [Errno 113] No route to host')': /jp6/cu126/pip/
ERROR: Could not find a version that satisfies the requirement pip (from versions: none)
ERROR: No matching distribution found for pip
The command '/bin/sh -c $TMP/install.sh' returned a non-zero code: 1
[20:22:10] Failed building:  tensorflow

Traceback (most recent call last):
  File "/home/vishal/Desktop/jetson-containers/jetson_containers/build.py", line
 129, in <module>
    build_container(**vars(args))
  File "/home/vishal/Desktop/jetson-containers/jetson_containers/container.py", 
line 225, in build_container
    status = subprocess.run(cmd.replace(_NEWLINE_, ' '), executable='/bin/bash',
 shell=True, check=True)
  File "/usr/lib/python3.10/subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'DOCKER_BUILDKIT=0 docker build --network
=host   --tag tensorflow:r36.4.tegra-aarch64-cu126-22.04-python   --file /home/v
ishal/Desktop/jetson-containers/packages/build/python/Dockerfile   --build-arg B
ASE_IMAGE=tensorflow:r36.4.tegra-aarch64-cu126-22.04-cudnn_9.3   --build-arg PYT
HON_VERSION="3.10"    /home/vishal/Desktop/jetson-containers/packages/build/pyth
on 2>&1 | tee /home/vishal/Desktop/jetson-containers/logs/20250730_202125/build/
05o12_tensorflow_r36.4.tegra-aarch64-cu126-22.04-python.txt; exit ${PIPESTATUS[0
]}' returned non-zero exit status 1.
 

Hi,

We don’t support Jetson Nano on JetPack 6 and CUDA 12.6.
So you should use the r32 branch instead.

Thanks.

Isn’t r32 for JetPack 4?
Should I reflash and use that instead?

Hi,

Err:11 https://repo.download.nvidia.com/jetson/t210 r36.4 Release

The above error comes from looking for t210 packages on the r36.4 branch, which is not supported.

Jetson Nano is only supported with JetPack 4, which uses the r32 branch.
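
A quick, hedged way to confirm which L4T line the board is actually running and what apt is pointed at (r32.x corresponds to JetPack 4, r36.x to JetPack 6):

cat /etc/nv_tegra_release
cat /etc/apt/sources.list.d/nvidia-l4t-apt-source.list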

Thanks.
