Instructions for installing opencv with CUDA support for Jetson Orin Nano Jetpack 6.2

Anybody got up to date instructions for installing opencv with cuda support? I tried the old version from jetsonhacks video, but it’s now out of date. Tried various other permutations, but to no avail.

nvidia-l4t-core 36.4.4-20250616085344
CUDA 12.6

Thanks !!!

*** Please note that this reply is generated by LLM automatically ***

Installing OpenCV with CUDA support on Jetson Orin Nano can be a bit tricky, but I’ll provide you with the most up-to-date instructions.

Prerequisites:

  1. Make sure you have the latest JetPack version installed on your Jetson Orin Nano. You can check the version by running dpkg -l | grep jetpack in the terminal.
  2. Ensure that CUDA is installed and configured correctly. You can verify this by running nvcc --version in the terminal (a combined check is sketched just below).
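
For example, the checks can be combined in the terminal like this (the package and file names assume a stock JetPack install):

# Show the installed JetPack meta-package and the L4T release
dpkg -l | grep nvidia-jetpack
cat /etc/nv_tegra_release

# Confirm the CUDA toolkit is reachable
nvcc --version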

Installing OpenCV with CUDA support:

  1. Method 1: Using the pre-built OpenCV package with CUDA support

You can install the pre-built OpenCV package with CUDA support using the following command:

sudo apt-get update
sudo apt-get install libopencv-dev

Note that the version pinned in older guides (e.g. 4.5.1-1+cuda11.1) targets earlier JetPack releases and is not available on JetPack 6.x. The libopencv-dev package shipped for JetPack 6.x is built without CUDA, so if you need CUDA acceleration you will most likely end up at Method 2 anyway.
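
Before pinning any version, it is worth checking what your apt sources actually provide; a quick way (standard apt commands, nothing Jetson-specific assumed):

# List every libopencv-dev version visible to the configured repositories
apt-cache madison libopencv-dev
apt-cache policy libopencv-dev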

  2. Method 2: Building OpenCV from source with CUDA support

If you prefer to build OpenCV from source, you can follow these steps:

a. Install the required dependencies:

sudo apt-get update
sudo apt-get install build-essential cmake git libgtk-3-dev libavcodec-dev libavformat-dev libswscale-dev libjpeg-dev libpng-dev libtiff-dev

(libjasper-dev, which older guides list here, is no longer available in Ubuntu 22.04, the base of JetPack 6.x, and can be skipped.)

b. Clone the OpenCV repository:

git clone https://github.com/opencv/opencv.git
cd opencv

c. Create a build directory and navigate to it:

mkdir build
cd build

d. Configure the build with CUDA support:

cmake -D CMAKE_BUILD_TYPE=Release -D WITH_CUDA=ON -D CUDA_ARCH_BIN=8.7 -D CUDA_ARCH_PTX=8.7 ..

Note: The CUDA_ARCH_BIN and CUDA_ARCH_PTX flags are set to 8.7, which is the compute capability of the Jetson Orin family, including the Orin Nano; 8.6 is the value for desktop Ampere GPUs, not Orin.
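
If you would rather confirm the compute capability on your own board than trust a hard-coded value, and you already happen to have a CUDA-enabled PyTorch installed (this is only a convenience check, not a requirement for the OpenCV build):

# Prints (8, 7) on the Orin family
python3 -c "import torch; print(torch.cuda.get_device_capability(0))"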

e. Build and install OpenCV:

make -j4
sudo make install

This will build and install OpenCV with CUDA support.

Verification:

After installation, you can verify that OpenCV is working with CUDA support by running the following command:

python3 -c "import cv2; print(cv2.getBuildInformation())"

This should display the OpenCV build information; look for the "NVIDIA CUDA" section to confirm that CUDA is enabled and which toolkit version was used.
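
Beyond reading the build information, a quick functional check is to count the CUDA devices and round-trip a small array through the GPU. This is a minimal sketch; it assumes the cv2 module you import is the CUDA-enabled build you just installed and that numpy is available:

python3 - <<'EOF'
import cv2
import numpy as np

# Should report at least one device if the CUDA modules were built in
print("CUDA devices:", cv2.cuda.getCudaEnabledDeviceCount())

# Upload a small array to the GPU and download it again as a smoke test
gpu = cv2.cuda_GpuMat()
gpu.upload(np.zeros((8, 8), dtype=np.uint8))
print("Round trip OK, shape:", gpu.download().shape)
EOF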

Comparison with the old method:

The old method from the JetsonHacks video targets earlier JetPack and CUDA releases, so it tends to fail on JetPack 6.x. The approach described above either installs whatever OpenCV package apt currently provides or rebuilds OpenCV from source against the CUDA 12.6 toolkit with the correct Orin architecture flag, which is the reliable route on current JetPack.

Additional tips:

  • Make sure to update your JetPack version regularly to ensure you have the latest security patches and features.
  • If you encounter any issues during the installation process, you can try cleaning up the build directory and starting from scratch (see the sketch after this list).
  • After running cmake, check the configuration summary it prints: if the CUDA toolkit is not found, WITH_CUDA=ON is silently ignored and you end up with a CPU-only build, so make sure the summary reports NVIDIA CUDA as enabled before running make.
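
For the clean-rebuild tip, a minimal sketch (paths assume the checkout layout from Method 2 above):

cd opencv                 # the checkout created in step b
rm -rf build              # discard the old CMake cache and compiled objects
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=Release -D WITH_CUDA=ON -D CUDA_ARCH_BIN=8.7 -D CUDA_ARCH_PTX=8.7 ..
make -j4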

*** Please give the thumbs up if you get answers you like. Or provide feedback to help us improve the answer. ***

No workio.
Please don't reply with an LLM again.

What I have been working on is variations of the below:

source python_env_02/bin/activate
sudo apt install -y build-essential cmake git pkg-config
sudo apt install -y python3-dev python3-numpy python3-pip
pip install numpy scipy matplotlib pandas

mkdir -p ~/opencv_build && cd ~/opencv_build
git clone --depth 1 --branch 4.7.0 https://github.com/opencv/opencv.git
git clone --depth 1 --branch 4.7.0 https://github.com/opencv/opencv_contrib.git
cd ~/opencv_build/opencv
mkdir build && cd build

cmake \
-D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
-D WITH_CUDA=ON \
-D WITH_CUDNN=ON \
-D ENABLE_FAST_MATH=1 \
-D CUDA_FAST_MATH=1 \
-D WITH_CUBLAS=1 \
-D WITH_GSTREAMER=ON \
-D WITH_TBB=ON \
-D BUILD_NEW_PYTHON_SUPPORT=ON \
-D BUILD_opencv_python3=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D OPENCV_ENABLE_NONFREE=ON \
-D PYTHON3_EXECUTABLE=~/python_env_02/bin/python \
-D PYTHON3_INCLUDE_DIR=/usr/include/python3.10 \
-D PYTHON3_LIBRARY=/usr/lib/aarch64-linux-gnu/libpython3.10.so \
-D PYTHON3_PACKAGES_PATH=~/python_env_02/lib/python3.10/site-packages \
-D CUDA_ARCH_BIN=8.7 \
-D INSTALL_PYTHON_EXAMPLES=OFF \
-D INSTALL_C_EXAMPLES=OFF \
-D BUILD_EXAMPLES=OFF ..

make -j6
sudo make install

cd ~/python_env_02/lib/python3.10/site-packages
ln -s /usr/local/lib/python3.10/site-packages/cv2.cpython-310-aarch64-linux-gnu.so cv2.so
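
After the install and the symlink, a quick way to confirm that the environment really picks up the CUDA build (a minimal check, assuming the python_env_02 paths above):

source ~/python_env_02/bin/activate
python -c "import cv2; print(cv2.__version__); print(cv2.cuda.getCudaEnabledDeviceCount())"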

Hi,
If there is no deb package at hand, you can build it from source manually. Please refer to the Jetson AGX Orin FAQ and try the script there.

That worked OK in one of my python environments when running with Ultralytics YOLO, but in another I got the following error:
Traceback (most recent call last):
  File "/home/nano/Documents/yolo_stuff/test_yolo_on_video_file_onions_16.py", line 21, in <module>
    from ultralytics import YOLO
  File "/home/nano/python_env_01/lib/python3.10/site-packages/ultralytics/__init__.py", line 12, in <module>
    from ultralytics.utils import ASSETS, SETTINGS
  File "/home/nano/python_env_01/lib/python3.10/site-packages/ultralytics/utils/__init__.py", line 25, in <module>
    import torch
  File "/home/nano/python_env_01/lib/python3.10/site-packages/torch/__init__.py", line 361, in <module>
    from torch._C import *  # noqa: F403
ImportError: libcusparseLt.so.0: cannot open shared object file: No such file or directory

… searching for the file yielded no results in that environment, but I found it installed in another one at:
/home/nano/python_env_01/lib/python3.10/site-packages/nvidia/cusparselt/lib/libcusparseLt.so.0

… not sure how it got there, maybe while installing jetson-inference?
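
Two possible workarounds for this kind of missing-library error, both sketches rather than verified fixes (the path is copied from where the file was found above, and the wheel name is what NVIDIA publishes on PyPI for CUDA 12):

# Option 1: point the dynamic loader at the copy that is already on disk
export LD_LIBRARY_PATH=/home/nano/python_env_01/lib/python3.10/site-packages/nvidia/cusparselt/lib:$LD_LIBRARY_PATH

# Option 2: install cuSPARSELt into the environment where the import fails
pip install nvidia-cusparselt-cu12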

I also had the same problem installing Ultralytics on the Orin Nano.
For a newbie like me it was horrible.
