OpenCV 4.2.0 and CuDNN for Jetson Nano?

If you use the Docker image, it's designed to work with any Tegra board. I use a Xavier to build. The libraries that differ between boards are bind mounted at runtime with --runtime nvidia. The approach has its issues, but that's one of its advantages. The script also runs on all Tegra boards.
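
To illustrate the runtime bind-mount approach, starting a container would look something like this (just a sketch; the image name is the Docker Hub repository linked later in this thread, and the tag may differ):

# --runtime nvidia makes the NVIDIA container runtime bind mount the host's Tegra libraries
sudo docker run -it --rm --runtime nvidia mdegans/tegra-opencv:latest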

It does build in Docker. It's configured successfully and I expect it'll work fine when I run some basic tests. I will push it here and update this thread when it's done (as well as update the repo with the changes). TYVM @Honey_Patouceul for the solution, and you @Andrey1984 for testing it out.

edit: so it works. Pushing the image, but my internet has been horrible lately. Yesterday it took nearly an hour to push an image that usually takes seconds.

Got it working with the latest JetPack 4.4 (CUDA 10.2.89, cuDNN 8.0.0.145) and OpenCV 4.3.0.

CMake parameters used:

cmake \
  -D WITH_CUDA=ON \
  -D WITH_CUDNN=ON \
  -D WITH_V4L=ON \
  -D OPENCV_DNN_CUDA=ON \
  -D CUDNN_VERSION='8.0' \
  -D CUDNN_INCLUDE_DIR='/usr/include/' \
  -D ENABLE_FAST_MATH=1 \
  -D CUDA_FAST_MATH=1 \
  -D CUDA_ARCH_BIN="5.3" \
  -D CUDA_ARCH_PTX="" \
  -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.3.0/modules \
  -D WITH_GSTREAMER=ON \
  -D WITH_LIBV4L=ON \
  -D BUILD_opencv_python2=ON \
  -D BUILD_opencv_python3=ON \
  -D BUILD_TESTS=OFF \
  -D BUILD_PERF_TESTS=OFF \
  -D BUILD_EXAMPLES=OFF \
  -D CMAKE_BUILD_TYPE=RELEASE \
  -D CMAKE_INSTALL_PREFIX=/usr/local \
  ..
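
After configuring, the build and install steps are the usual ones; something like this (adjust the job count to your board, and sudo make install assumes the /usr/local prefix above):

make -j4           # a full build can take hours on a Nano
sudo make install  # installs to CMAKE_INSTALL_PREFIX (/usr/local above)
sudo ldconfig      # refresh the shared library cache so the new libs are found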

Verified using $ sudo jtop

NVIDIA Jetson Nano (Developer Kit Version) - Jetpack 4.4 DP [L4T 32.4.2]
(jtop version 2.1.0, author Raffaello Bonghi, raffaello@rnext.it)

 - Up Time:        0 days 5:19:31
 - Jetpack:        4.4 DP [L4T 32.4.2]
 - Board:
   * Type:           Nano (Developer Kit Version)
   * SOC Family:     tegra210     ID: 33
   * Module:         P3448-0000   Board: P3449-0000
   * Code Name:      porg
   * Cuda ARCH:      5.3
   * Serial Number:  1422219007533
   * Board ids:      3448
 - Libraries:
   * CUDA:         10.2.89
   * OpenCV:       4.3.0  compiled CUDA: YES
   * TensorRT:     7.1.0.16
   * VPI:          0.2.0
   * VisionWorks:  1.6.0.501
   * Vulkan:       1.2.70
   * cuDNN:        8.0.0.145
 - Hostname:    jetson-nano
 - Interfaces:
   * eth0:      192.168.0.45
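
Besides jtop, a quick sanity check from Python can confirm the CUDA build is the one being imported, for example:

# should print a non-zero device count and show CUDA/cuDNN as enabled in the build info
python3 -c "import cv2; print(cv2.cuda.getCudaEnabledDeviceCount())"
python3 -c "import cv2; print(cv2.getBuildInformation())" | grep -iE "NVIDIA CUDA|cuDNN"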

I forgot to update, but images are built at the link with cudnn enabled:

https://hub.docker.com/r/mdegans/tegra-opencv

The build script and Dockerfile are linked off that.

Thanks a lot!!!

I have a Jetson Nano.
JetPack 4.4 comes with cuDNN 8.0.0, which does not work with the OpenCV cuDNN backend.
I rolled back to JetPack 4.3 with cuDNN 7.6.3 and all works fine.

NVIDIA - why don't you include an OpenCV 4.2.0 compiled with CUDA and cuDNN with your JetPack release?
Also, please fix cuDNN 8.0.0.

Best,
Mickey

NVIDIA - why don't you include an OpenCV 4.2.0 compiled with CUDA and cuDNN with your JetPack release?

I may be speaking out of turn, but I suspect there is some licensing or patent reason why they can’t.

Re: cuDNN 8.0, OpenCV will build if you specify the version manually, like this. Check out the docker branch if you want a more Docker-appropriate script and Dockerfile. Built images are at the Docker Hub link above.
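
That is, adding something like the following to the configure command (the same flags as in the build posted earlier), so CMake does not have to find the cuDNN version on its own:

  -D CUDNN_VERSION='8.0' \
  -D CUDNN_INCLUDE_DIR='/usr/include/' \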

Thank you for your comment.
Kindly follow [Compiling Opencv 4.2.0 + CUDA on jetson nano board · Issue #16439 · opencv/opencv · GitHub](https://github.com/opencv/opencv/issues/16439)

We have successfully compiled OpenCV 4.2.0 with CUDA 10.2 and cuDNN 8.0.0 (accounting for the change in location of the cuDNN version define).
When using it, it appears that NVIDIA did not include backend support (so this has nothing to do with licensing), and this omission causes a runtime error of:

error: (-215:Assertion failed) preferableBackend != DNN_BACKEND_OPENCV || preferableTarget == DNN_TARGET_CPU || preferableTarget == DNN_TARGET_OPENCL || preferableTarget == DNN_TARGET_OPENCL_FP16 in function 'setUpNet'

Triggered by:
    CV_Assert(preferableBackend != DNN_BACKEND_OPENCV ||
              preferableTarget == DNN_TARGET_CPU ||
              preferableTarget == DNN_TARGET_OPENCL ||
              preferableTarget == DNN_TARGET_OPENCL_FP16);

This support is present in previous versions of cuDNN, and indeed when switching back to JetPack 4.3, which has CUDA 10.0 and cuDNN 7.6.x, the code runs fine without the assert.
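
A quick way to see whether a given build actually has the CUDA DNN backend, before hitting that assert at runtime, is to ask the dnn module directly (if I recall correctly, getAvailableTargets is exposed in the 4.2+ Python bindings; otherwise grepping the build information works):

# an empty list means the CUDA backend is not available in this build
python3 -c "import cv2; print(cv2.dnn.getAvailableTargets(cv2.dnn.DNN_BACKEND_CUDA))"
# the build information should also report cuDNN: YES
python3 -c "import cv2; print(cv2.getBuildInformation())" | grep -i cudnn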

Not sure I understand how compiling OpenCV with CUDA and cuDNN support could cause NVIDIA licensing issues when it is compiled without opencv_contrib. CUDA and cuDNN are NVIDIA's own stuff, if I am not mistaken.

It would save us all a lot of quest expeditions in getting it to run on Jetson environments.

You may also check this for what might be the root cause (not checked further yet).

Re: licensing issue, it was just a guess, but I think it's actually not the reason. Per @dusty_nv, the reason is stability: some tests fail, although fewer of them recently from what I've heard. YMMV.

Re: who wrote it, I think you're right that NVIDIA itself contributed. They're in the LICENSE for contrib (oddly, in none of the headers):

Copyright (C) 2000-2018, Intel Corporation, all rights reserved.
Copyright (C) 2009-2011, Willow Garage Inc., all rights reserved.
Copyright (C) 2009-2015, NVIDIA Corporation, all rights reserved.
Copyright (C) 2010-2013, Advanced Micro Devices, Inc., all rights reserved.
Copyright (C) 2015-2018, OpenCV Foundation, all rights reserved.
Copyright (C) 2015-2016, Itseez Inc., all rights reserved.

I only meant the workaround of specifying the cuDNN version in the OpenCV configure step.
I'd tend to think that it is a better fix than mine.

[ 13%] Linking CXX shared library ../../lib/libopencv_cudev.so
/usr/bin/ld: skipping incompatible /usr/local/cuda-10.2/lib64/libcudnn.so when searching for -lcudnn
/usr/bin/ld: skipping incompatible /usr/local/cuda-10.2/lib64/libcudnn.so when searching for -lcudnn
/usr/bin/ld: cannot find -lcudnn
collect2: error: ld returned 1 exit status
modules/cudev/CMakeFiles/opencv_cudev.dir/build.make:95: recipe for target 'lib/libopencv_cudev.so.4.2.0' failed
make[2]: *** [lib/libopencv_cudev.so.4.2.0] Error 1
CMakeFiles/Makefile2:2816: recipe for target 'modules/cudev/CMakeFiles/opencv_cudev.dir/all' failed
make[1]: *** [modules/cudev/CMakeFiles/opencv_cudev.dir/all] Error 2
Makefile:162: recipe for target 'all' failed
make: *** [all] Error 2

I'm getting this error on the make command. Can anyone please help?

@shijin.mtl

Which build script are you using, if any?
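
Also, the "skipping incompatible ... libcudnn.so" lines usually mean the linker found a library built for the wrong architecture first. On JetPack 4.4 the real cuDNN normally lives under /usr/lib/aarch64-linux-gnu/, so it may help to check and point CMake at it explicitly, roughly like this:

ls -l /usr/lib/aarch64-linux-gnu/libcudnn.so*   # confirm where libcudnn actually is
# then add to the cmake configure line, adjusting the path to what ls reports:
#   -D CUDNN_LIBRARY=/usr/lib/aarch64-linux-gnu/libcudnn.so.8
#   -D CUDNN_INCLUDE_DIR=/usr/include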

I'm also experiencing a similar error, with JetPack 4.4 and CUDA 10.2.89, trying to install OpenCV 4.3.0 on a Xavier NX:

cmake \
  -D WITH_CUDA=ON \
  -D WITH_CUDNN=ON \
  -D WITH_V4L=ON \
  -D OPENCV_DNN_CUDA=ON \
  -D CUDNN_VERSION='8.0' \
  -D CUDNN_INCLUDE_DIR='/usr/include/' \
  -D ENABLE_FAST_MATH=1 \
  -D CUDA_FAST_MATH=1 \
  -D CUDA_ARCH_BIN="5.3,6.2,7.2" \
  -D CUDA_ARCH_PTX="" \
  -D WITH_GSTREAMER=ON \
  -D WITH_LIBV4L=ON \
  -D BUILD_opencv_python2=ON \
  -D BUILD_opencv_python3=ON \
  -D BUILD_TESTS=OFF \
  -D BUILD_PERF_TESTS=OFF \
  -D BUILD_EXAMPLES=OFF \
  -D CMAKE_BUILD_TYPE=RELEASE \
  -D CMAKE_INSTALL_PREFIX=/usr/local \
  -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-4.3.0/modules \
  -D WITH_OPENGL=ON \
  -D WITH_CUBLAS=ON \
  -D OPENCV_GENERATE_PKGCONFIG=ON \
  -D OPENCV_ENABLE_NONFREE=ON \
  -D ENABLE_NEON=ON \
  ..

make -j6

I get:

[ 62%] Building CXX object modules/cudafilters/CMakeFiles/opencv_cudafilters.dir/src/filtering.cpp.o
[ 62%] Linking CXX shared library ../../lib/libopencv_cudafilters.so
[ 62%] Built target opencv_cudafilters
Makefile:162: recipe for target 'all' failed
make: *** [all] Error 2

Solved using mdegans/nano_build_opencv (Build OpenCV on Nvidia Jetson Nano): https://github.com/mdegans/nano_build_opencv

-D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-11.0 -D CUDNN_INCLUDE_DIR=/usr/include -D CUDNN_LIBRARY=/usr/local/cuda/lib64/libcudnn.so.8 -D CUDNN_VERSION=8.0.3.33

This works for me…

-- NVIDIA CUDA: YES (ver 11.0, CUFFT CUBLAS FAST_MATH)
-- NVIDIA GPU arch: 61
-- NVIDIA PTX archs:

-- cuDNN: YES (ver 8.0.3.33)
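
For clarity, those options just get appended to the cmake configure call alongside the flags earlier in the thread; roughly like this (paths taken from the post above, adjust to wherever CUDA and cuDNN live on your system):

cmake \
  -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-11.0 \
  -D CUDNN_INCLUDE_DIR=/usr/include \
  -D CUDNN_LIBRARY=/usr/local/cuda/lib64/libcudnn.so.8 \
  -D CUDNN_VERSION=8.0.3.33 \
  -D WITH_CUDA=ON \
  -D WITH_CUDNN=ON \
  -D OPENCV_DNN_CUDA=ON \
  -D CMAKE_BUILD_TYPE=RELEASE \
  -D CMAKE_INSTALL_PREFIX=/usr/local \
  ..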