I am using a Jetson Orin Nano with the JetPack 6.0 Developer Kit installed.
The environment is as follows:
[OS]
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.5 LTS
Release: 22.04
Codename: jammy
[Jetpack version]
$ sudo apt show nvidia-jetpack
[sudo] password for jetson:
Package: nvidia-jetpack
Version: 6.0-b52
Priority: standard
Section: metapackages
Maintainer: NVIDIA Corporation
Installed-Size: 199 kB
Depends: nvidia-jetpack-runtime (= 6.0-b52), nvidia-jetpack-dev (= 6.0-b52)
Homepage: http://developer.nvidia.com/jetson
Download-Size: 29.3 kB
APT-Sources: https://repo.download.nvidia.com/jetson/common r36.2/main arm64 Packages
Description: NVIDIA Jetpack Meta Package
[L4T version]
$ cat /etc/nv_tegra_release
# R36 (release), REVISION: 2.0, GCID: 35084178, BOARD: generic, EABI: aarch64, DATE: Tue Dec 19 05:55:03 UTC 2023
# KERNEL_VARIANT: oot
TARGET_USERSPACE_LIB_DIR=nvidia
TARGET_USERSPACE_LIB_DIR_PATH=usr/lib/aarch64-linux-gnu/nvidia
[CUDA]
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:08:11_PDT_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0
Next, I installed Docker on the Jetson Orin Nano and launched a container from the l4t-jetpack:r35.4.1 image.
$ docker run -it --name=r35.4.1-test --net=host --runtime nvidia \
-v /tmp/.X11-unix/:/tmp/.X11-unix \
-v /home/jetson/Code:/home \
-v /usr/src/jetson_multimedia_api:/usr/src/jetson_multimedia_api \
nvcr.io/nvidia/l4t-jetpack:r35.4.1
Next, I installed PyTorch inside the container using the following commands.
apt-get update && apt-get upgrade -y
apt-get install -y curl python3-pip vim git wget ffmpeg libopencv-dev
python3 -m pip install -U pip && python3 -m pip install aiohttp 'numpy<2' && python3 -m pip install -U protobuf
apt-get -y install libopenblas-dev libopenblas-base libopenmpi-dev libomp-dev
python3 -m pip install 'Cython<3' cmake
python3 -m pip install torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
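As an aside, the wheel's cpXY tag must match the container's Python version for pip to accept it; the r35 image ships Ubuntu 20.04 with Python 3.8, which is why the cp38 wheel installs there. A tiny hypothetical helper just to illustrate the check pip performs on the filename:

```python
import re

def wheel_python_tag(wheel_name):
    """Parse the CPython version from a wheel filename's cpXY tag.

    e.g. '...-cp38-cp38-linux_aarch64.whl' -> (3, 8).
    Returns None when no tag is present. (Hypothetical helper, only to
    illustrate the compatibility check pip performs.)
    """
    m = re.search(r"-cp(\d)(\d+)-", wheel_name)
    return (int(m.group(1)), int(m.group(2))) if m else None

# The JetPack 5 wheel targets Python 3.8 (Ubuntu 20.04 in the r35 image):
print(wheel_python_tag("torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl"))
# → (3, 8)
```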
Finally, I checked whether PyTorch recognizes CUDA.
[torch_check.py]
import torch
print(torch.__version__)
print(torch.cuda.is_available())
Running the script gives:
$ python3 torch_check.py
2.1.0a0+41361538.nv23.06
False
The environment inside the container is as follows:
[Jetpack version]
$ apt show nvidia-jetpack
Package: nvidia-jetpack
Version: 5.1.2-b104
Priority: standard
Section: metapackages
Maintainer: NVIDIA Corporation
Installed-Size: 199 kB
Depends: nvidia-jetpack-runtime (= 5.1.2-b104), nvidia-jetpack-dev (= 5.1.2-b104)
Homepage: http://developer.nvidia.com/jetson
Download-Size: 29.3 kB
APT-Sources: https://repo.download.nvidia.com/jetson/common r35.4/main arm64 Packages
Description: NVIDIA Jetpack Meta Package
[L4T version]
$ cat /etc/nv_tegra_release
# R36 (release), REVISION: 2.0, GCID: 35084178, BOARD: generic, EABI: aarch64, DATE: Tue Dec 19 05:55:03 UTC 2023
# KERNEL_VARIANT: oot
TARGET_USERSPACE_LIB_DIR=nvidia
TARGET_USERSPACE_LIB_DIR_PATH=usr/lib/aarch64-linux-gnu/nvidia
[CUDA]
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Sun_Oct_23_22:16:07_PDT_2022
Cuda compilation tools, release 11.4, V11.4.315
Build cuda_11.4.r11.4/compiler.31964100_0
Why doesn't PyTorch recognize CUDA in this setup?
How can I get PyTorch to recognize CUDA?
By the way, I have confirmed that PyTorch does recognize CUDA when torch-2.1.0-cp310-cp310-linux_aarch64.whl is installed in a container based on l4t-jetpack:r36.2.0.
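That result matches my suspicion: the host reports L4T r36, while the r35.4.1 image carries an r35 / CUDA 11.4 userspace, so the driver libraries the runtime mounts from the host may not match the container's CUDA stack. A rough sketch of the comparison I am making (the parsing is my own, not an NVIDIA tool):

```python
import re

def l4t_major(release_line):
    """Extract the L4T major release from an /etc/nv_tegra_release line,
    e.g. '# R36 (release), REVISION: 2.0, ...' -> 36."""
    m = re.search(r"R(\d+)\s*\(release\)", release_line)
    if not m:
        raise ValueError("unrecognized nv_tegra_release line")
    return int(m.group(1))

# Host (and therefore the driver libraries mounted into the container):
host = l4t_major("# R36 (release), REVISION: 2.0, GCID: 35084178")
container_image = 35  # l4t-jetpack:r35.4.1 targets L4T r35 / CUDA 11.4
print(host == container_image)  # → False: container userspace vs. host driver mismatch
```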
Thank you.
[Articles that may be relevant]