@dusy_nv, I started from scratch with a new JetPack 6.0 DP image (I have plenty of blank SD cards). I went through the Hello AI World build-from-source again, since I'm familiar with that approach. After the cmake ../ step in Configuring with CMake, the terminal output shows the following. Full log: log_cmake.txt (51.9 KB)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 525, in build_extensions
    _check_cuda_version(compiler_name, compiler_version)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 413, in _check_cuda_version
    raise RuntimeError(CUDA_MISMATCH_MESSAGE.format(cuda_str_version, torch.version.cuda))
RuntimeError:
The detected CUDA version (11.5) mismatches the version that was used to compile
PyTorch (12.2). Please make sure to use the same CUDA versions.
[jetson-inference] installation complete, exiting with status code 0
[jetson-inference] to run this tool again, use the following commands:
$ cd /build
$ ./install-pytorch.sh
[Pre-build] Finished CMakePreBuild script
-- Finished installing dependencies
-- using patched FindCUDA.cmake
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Found CUDA: /usr (found version "11.5")
-- CUDA version: 11.5
-- CUDA 11.5 detected (aarch64), enabling SM_53 SM_62
-- CUDA 11.5 detected (aarch64), enabling SM_72
-- CUDA 11.5 detected (aarch64), enabling SM_87
-- Found OpenCV: /usr (found version "4.8.0") found components: core calib3d
-- OpenCV version: 4.8.0
-- OpenCV version >= 3.0.0, enabling OpenCV
CMake Warning at CMakeLists.txt:106 (find_package):
  Could not find a configuration file for package "VPI" that is compatible
  with requested version "2.0".

  The following configuration files were considered but not accepted:

    /usr/lib/cmake/vpi3/vpi-config.cmake, version: 3.0.10
    /lib/cmake/vpi3/vpi-config.cmake, version: 3.0.10
.
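One detail that stands out in the log above: FindCUDA reports /usr (not /usr/local/cuda) as the toolkit root, which suggests a second, older toolkit is installed system-wide. These diagnostic commands (my sketch, not from the original build; paths are Ubuntu/JetPack defaults) show where an 11.5 toolkit could be coming from:

```shell
# Which nvcc is on the PATH, and which package owns the CUDA headers under /usr?
command -v nvcc && nvcc --version | grep -i release || echo "nvcc not on PATH"
dpkg -S /usr/include/cuda.h 2>/dev/null || echo "no CUDA headers under /usr/include"
# The Ubuntu 22.04 nvidia-cuda-toolkit apt package ships CUDA 11.5 and would show up here:
dpkg -l 2>/dev/null | grep -i cuda-toolkit || true
```

If that last command lists nvidia-cuda-toolkit, an apt-installed 11.5 is shadowing the JetPack 12.2 install.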
But the imaged 6.0 SD card only has CUDA 12; the cmake process is confusing the CUDA versions. `ll /usr/local` shows this:
jet@sky:~$ ll /usr/local
drwxr-xr-x 11 root root 4096 Nov 30 16:33 ./
drwxr-xr-x 11 root root 4096 Feb 17  2023 ../
drwxr-xr-x  2 root root 4096 Jan  6 01:50 bin/
lrwxrwxrwx  1 root root   22 Nov 30 16:33 cuda -> /etc/alternatives/cuda/
lrwxrwxrwx  1 root root   25 Nov 30 16:33 cuda-12 -> /etc/alternatives/cuda-12/
drwxr-xr-x 12 root root 4096 Nov 30 16:33 cuda-12.2/
drwxr-xr-x  2 root root 4096 Feb 17  2023 etc/
drwxr-xr-x  2 root root 4096 Feb 17  2023 games/
drwxr-xr-x  4 root root 4096 Jan  6 01:50 include/
drwxr-xr-x  4 root root 4096 Jan  6 01:50 lib/
lrwxrwxrwx  1 root root    9 Feb 17  2023 man -> share/man/
drwxr-xr-x  2 root root 4096 Feb 17  2023 sbin/
drwxr-xr-x  9 root root 4096 Jan  6 01:50 share/
drwxr-xr-x  2 root root 4096 Feb 17  2023 src/
.
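The cuda symlink above goes through /etc/alternatives, so the chain can be followed to confirm what /usr/local/cuda really resolves to (a quick check I'd add here; stock JetPack layout assumed):

```shell
# Resolve the /usr/local/cuda symlink chain (stock JetPack layout assumed).
readlink -f /usr/local/cuda 2>/dev/null || echo "no /usr/local/cuda symlink"
# update-alternatives manages the /etc/alternatives/cuda link:
update-alternatives --display cuda 2>/dev/null || echo "no cuda alternative registered"
```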
I went through the remaining steps in Compiling the Project to complete the Hello AI World install. Then I circled back to the PyTorch installation tool:
cd jetson-inference/build
./install-pytorch.sh
.
Only one package was listed for installation, `PyTorch 2.1 for Python 3.10`. I selected it again, pressed Enter to continue, and it output this:
  File "/usr/lib/python3.10/distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 525, in build_extensions
    _check_cuda_version(compiler_name, compiler_version)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 413, in _check_cuda_version
    raise RuntimeError(CUDA_MISMATCH_MESSAGE.format(cuda_str_version, torch.version.cuda))
RuntimeError:
The detected CUDA version (11.5) mismatches the version that was used to compile
PyTorch (12.2). Please make sure to use the same CUDA versions.
[jetson-inference] installation complete, exiting with status code 0
[jetson-inference] to run this tool again, use the following commands:
.
It still thinks CUDA 11.5 is installed, even though `ll /usr/local` still shows CUDA 12.2:
drwxr-xr-x 11 root root 4096 Nov 30 16:33 ./
drwxr-xr-x 11 root root 4096 Feb 17  2023 ../
drwxr-xr-x  2 root root 4096 Jan  6 01:50 bin/
lrwxrwxrwx  1 root root   22 Nov 30 16:33 cuda -> /etc/alternatives/cuda/
lrwxrwxrwx  1 root root   25 Nov 30 16:33 cuda-12 -> /etc/alternatives/cuda-12/
drwxr-xr-x 12 root root 4096 Nov 30 16:33 cuda-12.2/
drwxr-xr-x  2 root root 4096 Feb 17  2023 etc/
...
.
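For anyone bisecting this: torch.utils.cpp_extension picks its toolkit from the CUDA_HOME environment variable first (then CUDA_PATH, then whichever nvcc is on the PATH), so one thing worth trying, as a guess on my part rather than a verified fix, is pinning the environment to the 12.2 install before re-running the installer from a clean build directory:

```shell
# Point torch's extension builder at the JetPack 12.2 toolkit
# (guess at a workaround, not a confirmed fix):
export CUDA_HOME=/usr/local/cuda-12.2
export PATH=/usr/local/cuda-12.2/bin:$PATH
# then, from jetson-inference/build, drop the cached cmake values and retry:
#   rm -f CMakeCache.txt && rm -rf CMakeFiles && cmake ../
#   ./install-pytorch.sh
```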
I checked in a python3 session and, as expected, torchvision is not installed:
jet@sky:~$ python3
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'2.1.0'
>>> torch.cuda.is_available()
True
>>> import torchvision
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torchvision'
.
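A non-interactive way to re-check this after any rebuild (a one-off sketch, not part of the jetson-inference tooling):

```shell
# Report which of the two packages the interpreter can actually import.
python3 - <<'EOF'
import importlib.util
for name in ("torch", "torchvision"):
    spec = importlib.util.find_spec(name)
    print(name, "installed" if spec else "missing")
EOF
```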
Is the cmake step the problem, or the 6.0 DP image?