Hello,
I am trying to build a Docker image with CUDA-enabled OpenCV for my Jetson Nano GB Developer Kit.
I flashed JetPack 4.6.1 on the Jetson and built a new Docker image on top of the NVIDIA L4T r32.7.1 base image, where I added OpenCV 4.1.1 (the version included in JetPack 4.6.1), compiled with CUDA 10.2 support:
ARG L4T_FULL_VERSION=r32.7.1
ARG BASE_IMAGE=nvcr.io/nvidia/l4t-base:${L4T_FULL_VERSION}
FROM ${BASE_IMAGE} as base
ARG CUDA_VERSION=10.2
ARG OPENCV_VERSION=4.1.1
ARG L4T_MAJOR_VERSION=r32.7
WORKDIR /root
# make the CUDA libraries visible to the dynamic linker
RUN echo '/usr/local/cuda/lib64' >> /etc/ld.so.conf.d/nvidia-tegra.conf
RUN ldconfig
# install the dependencies
RUN apt-get update && apt-get install -q -y \
dialog apt-utils \
build-essential cmake git unzip pkg-config zlib1g-dev \
libjpeg-dev libjpeg8-dev libjpeg-turbo8-dev libpng-dev libtiff-dev \
libavcodec-dev libavformat-dev libswscale-dev libglew-dev \
libgtk2.0-dev libgtk-3-dev libcanberra-gtk* \
python-dev python-numpy python-pip \
python3-dev python3-numpy python3-pip \
libxvidcore-dev libx264-dev \
libtbb2 libtbb-dev libdc1394-22-dev libxine2-dev \
gstreamer1.0-tools libv4l-dev v4l-utils v4l2ucp qv4l2 \
libgstreamer-plugins-base1.0-dev libgstreamer-plugins-good1.0-dev \
libavresample-dev libvorbis-dev libtesseract-dev \
libfaac-dev libmp3lame-dev libtheora-dev libpostproc-dev \
libopencore-amrnb-dev libopencore-amrwb-dev \
libopenblas-dev libatlas-base-dev libblas-dev \
liblapack-dev liblapacke-dev libeigen3-dev gfortran \
libhdf5-dev protobuf-compiler \
libprotobuf-dev libgoogle-glog-dev libgflags-dev \
&& rm -rf /var/lib/apt/lists/*
# install the CUDA toolkit from the NVIDIA Jetson apt repositories
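# (common = shared Jetson packages, t210 = the feed for the Nano's Tegra X1 SoC)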
RUN echo "deb https://repo.download.nvidia.com/jetson/common $L4T_MAJOR_VERSION main" > /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
RUN echo "deb https://repo.download.nvidia.com/jetson/t210 $L4T_MAJOR_VERSION main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
RUN apt-key adv --fetch-key https://repo.download.nvidia.com/jetson/jetson-ota-public.asc
RUN apt-get update && apt-get install -q -y \
cuda-tools-10-2 cuda-libraries-10-2 \
&& rm -rf /var/lib/apt/lists/*
RUN ldconfig
# download the OpenCV and opencv_contrib sources (pinned to $OPENCV_VERSION)
RUN wget -O opencv.zip https://github.com/opencv/opencv/archive/$OPENCV_VERSION.zip
RUN wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/$OPENCV_VERSION.zip
# unpack
RUN unzip opencv.zip && unzip opencv_contrib.zip \
 && mv opencv-$OPENCV_VERSION opencv \
 && mv opencv_contrib-$OPENCV_VERSION opencv_contrib \
 && rm opencv.zip opencv_contrib.zip
# create and enter the build directory (WORKDIR creates it if missing)
WORKDIR /root/opencv/build
# Reference: https://hub.docker.com/r/mdegans/tegra-opencv
# https://forums.developer.nvidia.com/t/opencv-4-2-0-and-cudnn-for-jetson-nano/112281/44
RUN cmake \
-D CMAKE_LIBRARY_PATH=/usr/local/cuda/lib64/stubs \
-D BUILD_EXAMPLES=OFF \
-D BUILD_opencv_python2=OFF \
-D BUILD_opencv_python3=OFF \
-D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
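# 5.3 = Nano/TX1 (Maxwell), 6.2 = TX2 (Pascal), 7.2 = Xavier (Volta) \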
-D CUDA_ARCH_BIN=5.3,6.2,7.2 \
-D CUDA_ARCH_PTX= \
-D CUDA_FAST_MATH=ON \
-D EIGEN_INCLUDE_PATH=/usr/include/eigen3 \
-D ENABLE_NEON=ON \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_EXTRA_MODULES_PATH=/root/opencv_contrib/modules \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D WITH_CUBLAS=ON \
-D WITH_CUDA=ON \
# -D WITH_CUDNN=ON \
# -D CUDNN_VERSION='8.0' \
# -D OPENCV_DNN_CUDA=ON \
-D WITH_GSTREAMER=ON \
-D WITH_LIBV4L=ON \
-D WITH_OPENGL=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D INSTALL_TESTS=OFF \
..
# build and install (-j1 keeps memory usage down; more jobs can exhaust the Nano's RAM)
RUN make -j1 && make install && make clean
RUN ldconfig
WORKDIR /root
RUN rm -R opencv opencv_contrib
ARG REVISION
LABEL revision=${REVISION}
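For reference, I build the image with a command along these lines (the tag name is arbitrary, just what I use locally):
docker build -t l4t-opencv-cuda:r32.7.1 --build-arg REVISION=1 .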
The Docker image builds without any issue. So, to check that everything was OK, I tried this code inside a container based on the built image:
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/cudaarithm.hpp>

using namespace std;
using namespace cv;
using namespace cv::cuda;

int main()
{
    printShortCudaDeviceInfo(getDevice());
    int cuda_devices_number = getCudaEnabledDeviceCount();
    cout << "CUDA Device(s) Number: " << cuda_devices_number << endl;
    DeviceInfo _deviceInfo;
    bool _is_device_compatible = _deviceInfo.isCompatible();
    cout << "CUDA Device(s) Compatible: " << _is_device_compatible << endl;
    return 0;
}
I successfully compiled it using:
g++ check_cuda.cpp -o check_cuda `pkg-config opencv4 --cflags --libs`
But when I run it (./check_cuda) inside the container, I get the following error message:
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.1.1) /root/opencv/modules/core/src/cuda_info.cpp:84: error: (-217:Gpu API call) CUDA driver version is insufficient for CUDA runtime version in function 'getDevice'
I don't understand why, since I used the latest official L4T base image for the Jetson Nano (containing CUDA 10.2) and installed the same OpenCV version as the one shipped with the latest JetPack.
I built OpenCV with CUDA_ARCH_BIN including 5.3, which, if I'm not mistaken, is the correct compute capability for the Jetson Nano.
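(To double-check which architectures the build targets, a minimal snippet like the following, needing only opencv2/core.hpp, prints the build configuration; with the flags above, the "NVIDIA GPU arch" line in its "NVIDIA CUDA" section should read 53 62 72:
#include <iostream>
#include <opencv2/core.hpp>

int main()
{
    // dumps the full CMake configuration OpenCV was compiled with
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}
It compiles with the same pkg-config line as above.)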
So, I don't have any ideas anymore…
For information, the CUDA version is the same on the Jetson and in the container; nvcc --version reports in both cases:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_28_22:34:44_PST_2021
Cuda compilation tools, release 10.2, V10.2.300
Build cuda_10.2_r440.TC440_70.29663091_0
Thanks for your help.