OpenCV 4.2.0 now supports cuDNN. I have successfully compiled OpenCV 4.2.0 on my Windows 10 system using:
- OpenCV 4.2.0 (for sure)
- CUDA 10.2
- cuDNN 7.6.5 for CUDA 10.2
On the Jetson Nano, I have:
- CUDA 10.0
- cuDNN 7.6.? (the version that ships with JetPack 4.2)
I was wondering whether it is possible to compile OpenCV 4.2.0 with these versions of CUDA and cuDNN.
I know how to run the build, but I would like to be sure the compilation will run to completion with these library versions.
Many thanks for your insights.
We recommend giving it a try; it should work when building from source.
You can start with our script, which is verified for v4.1.1:
I will give it a try. I have already succeeded in compiling OpenCV 4.1.1 on the Nano, so I plan to use the same script, but this time with the following flags for the compile options:
Do you think those flags are OK?
I think I will also need to create a swap file (4 GB) to be able to compile OpenCV, and I will use all 4 cores.
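For reference, a swap file like the one mentioned above can be created with the usual commands. This is just a sketch; the 4 GB size comes from the discussion here, and /swapfile is an assumed path:

```shell
# Create a 4 GB swap file (root required); /swapfile is an assumed path
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
# Write the swap signature and enable it
sudo mkswap /swapfile
sudo swapon /swapfile
# Verify: the Swap line of free -h should now show the extra space
free -h
```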
Your opinion?
Have a nice day.
We haven’t enabled OPENCV_DNN_CUDA before, so we are not sure whether it works.
But you can give it a try.
I have cuDNN 7.3.1 on my Nano, and of course that is not a suitable version for OpenCV 4.2.0, which needs cuDNN 7.5 or later.
Can NVIDIA provide a download link for the cuDNN 7.6 that ships with the latest JetPack?
I know I can download the latest Nano image and flash a new SD card, but this is really BORING because I will have to configure the new SD card with all the libraries I need.
This is REALLY REALLY BORING.
Many thanks in advance.
Hi Alain, you would need to re-flash your SD card with the JetPack 4.3 image for Nano. It isn’t possible to upgrade cuDNN package version independently, because it has dependencies on CUDA version and L4T version.
If you re-flash your SD card with JetPack 4.3, then you shouldn’t need to re-flash again, because JetPack 4.3 enables a new APT server for upgrading to future JetPack releases from the command line (à la ‘sudo apt upgrade’) instead of re-flashing. You will still need to re-flash to upgrade to JetPack 4.3 from previous releases, but once you are on JetPack 4.3 you should be all set for the future. For more info about the APT-based future updates, see:
Yes, I have accepted the fact that I have to flash a new SD card with JetPack 4.3.
I am downloading the JetPack 4.3 image, but it is taking hours because my internet connection is really bad.
I am a bit worried about this new SD card because I will have to redo so many installation steps to get my software working again!
Anyway, there is no other choice.
Are you working on a Jetson Nano 2?
Will we get a 5.x Linux kernel for the Nano soon?
Well, still an hour and a half to go before I have JetPack 4.3.
Have a nice day Dustin.
Thanks Alain, with any luck hopefully this will be the last time you need to re-flash your SD card :)
We are working to get Jetson Xavier NX ready to ship, it is Nano-sized but with a lot more performance.
We don’t have plans to migrate to the Linux 5.x kernel in the near future; it is in early planning.
If I work hard, maybe you will send me a Xavier NX for Christmas! I am joking.
I don’t understand how that kind of SBC works. Do you need to plug the board into another computer to be able to use it?
Well, I will be back as soon as I recover from my nervous breakdown from trying to get the right settings for my new SD card.
You will be able to plug the Xavier NX module into a B01-revision Jetson Nano Developer Kit (945-13450-0000-100).
OK, I will take a look at this.
It seems I have successfully compiled and installed OpenCV 4.2.0 with cuDNN on the Jetson Nano.
I did not really run tests because it is late and I am tired, but I can import cv2 in Python 3, and print(cv2.__version__) says 4.2.0.
I used these flags with cmake:
cmake -D WITH_CUDA=ON -D WITH_CUDNN=ON -D OPENCV_DNN_CUDA=ON -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 -D CUDA_ARCH_BIN="5.3,6.2,7.2" -D CUDA_ARCH_PTX="" -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.2.0/modules -D WITH_GSTREAMER=ON -D WITH_LIBV4L=ON -D BUILD_opencv_python2=ON -D BUILD_opencv_python3=ON -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_EXAMPLES=OFF -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
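A hedged side note on the architecture list: the Nano’s GPU is Maxwell, compute capability 5.3 (6.2 and 7.2 correspond to the TX2 and Xavier), so if the binaries only need to run on the Nano, the list can probably be trimmed to shorten the build:

```shell
# Build CUDA kernels for the Nano only (sm_53); keep 6.2/7.2 in the list
# if the same build must also run on a TX2 or Xavier
-D CUDA_ARCH_BIN="5.3" -D CUDA_ARCH_PTX=""
```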
I still need to install many other libraries on my brand new JetPack 4.3 SD card, but for now, it’s OK.
Have a nice day.
I took a quick look. So, if I understand correctly, the Jetson Xavier NX can replace the module in the Jetson Nano dev kit!
That’s really great.
But I guess the Nano kit you gave me (thanks again) can’t host the brand new Jetson Xavier NX, since mine is the first revision of the Nano.
Well, too bad, but I can still do some work with the Nano, so I will survive.
But first, i must sleep.
It’s very slow (like, very very slow), and a lot of things fail. You will probably need a swapfile mounted as well. If you do end up testing 4.2.0, please post the results, since I haven’t actually run through all of them yet (I cancelled the run).
Re: whether it can replace the Nano module on the dev kit carrier board: my understanding (correct me if I’m wrong, dusty_nv) is that it will work with the later revisions, but not the original. You will need to check the part number to see if it matches your board.
OpenCV 4.2.0 with cuDNN is just an exercise, as I am starting to study AI capabilities for my projects.
I will post my feedback, but it will take time.
I also think the first-generation Jetson Nano can’t work with the Xavier NX.
If I do good and interesting work with AI, and since I work with high-resolution videos and pictures, maybe I will need more power than the Nano can give.
In that case, and if NVIDIA is interested in my work, I will ask for NVIDIA’s support again.
I still need to do the work before thinking about a more powerful system.
Anyway, if I need more power, I can use my laptop with its GTX 1060 GPU.
Maybe I said “victory is mine” too quickly.
My system is very unstable and laggy. I can’t use it, so I have to flash my SD card again with JetPack 4.3.
I will forget about OpenCV 4.2.0 for now.
cmake -D WITH_CUDA=ON -D WITH_CUDNN=ON -D OPENCV_DNN_CUDA=ON -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 -DWITH_CUBLAS=1 -D CUDA_ARCH_BIN="7.2" -D CUDA_ARCH_PTX="" -D WITH_GSTREAMER=ON -D WITH_LIBV4L=ON -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_EXAMPLES=ON -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D INSTALL_PYTHON_EXAMPLES=ON -D INSTALL_C_EXAMPLES=OFF -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.2.0/modules -D BUILD_opencv_python3=yes -D PYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.6m.so -D BUILD_opencv_cudacodec=OFF -D CUDNN_INCLUDE_DIR=/usr/lib/aarch64-linux-gnu/ ..
-- Detected processor: aarch64
-- Looking for ccache - not found
-- Found ZLIB: /usr/lib/aarch64-linux-gnu/libz.so (found suitable version "1.2.11", minimum required is "1.2.3")
-- Could NOT find Jasper (missing: JASPER_LIBRARIES JASPER_INCLUDE_DIR)
-- Found ZLIB: /usr/lib/aarch64-linux-gnu/libz.so (found version "1.2.11")
CMake Error at cmake/FindCUDNN.cmake:68 (file):
file failed to open for reading (No such file or directory):
Call Stack (most recent call first):
-- Could NOT find CUDNN: Found unsuitable version "..", but required is at least "7.5" (found CUDA_cudnn_LIBRARY-NOTFOUND)
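For what it’s worth, the unsuitable version “..” in that message usually means FindCUDNN could not read the CUDNN_MAJOR/CUDNN_MINOR/CUDNN_PATCHLEVEL macros from cudnn.h: the CUDNN_INCLUDE_DIR flag above points at the library directory, which holds libcudnn.so but not the header. A likely fix, assuming the stock JetPack layout, is to point CMake at the header and the library separately:

```shell
# Hypothetical corrected flags (verify these paths exist on your image)
-D CUDNN_INCLUDE_DIR=/usr/include \
-D CUDNN_LIBRARY=/usr/lib/aarch64-linux-gnu/libcudnn.so
```

(The PYTHON_LIBRARY flag above also points at an x86_64 path, which cannot be right on an aarch64 board.)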
How do I check which cuDNN version is inside the latest l4t container?
$ apt policy libcudnn7
*** 7.6.3.28-1+cuda10.0 500
500 https://repo.download.nvidia.com/jetson/common r32/main
Tab completion works with anything apt, so if that’s not the exact package name, just hit tab a few times. Also, apt search is useful.
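Besides apt, the installed version can also be read straight from the header’s version macros. A small sketch; the here-string below is an inline stand-in for the real /usr/include/cudnn.h on the device:

```shell
# Sample of the version macros as they appear in cudnn.h; on a Jetson or
# inside the container, replace the here-string with: cat /usr/include/cudnn.h
header='#define CUDNN_MAJOR 7
#define CUDNN_MINOR 6
#define CUDNN_PATCHLEVEL 3'

# Pull out the three numbers and join them with dots
printf '%s\n' "$header" \
  | awk '$2 ~ /^CUDNN_(MAJOR|MINOR|PATCHLEVEL)$/ {print $3}' \
  | paste -sd. -
# prints: 7.6.3
```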
apt policy libcudnn7
N: Unable to locate package libcudnn7
after adding the sources.list.d list file:
root@c74dcc260e4c:~/opencv-4.2.0/build# apt install libcudnn7
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
Use 'apt autoremove' to remove it.
The following NEW packages will be installed:
0 upgraded, 1 newly installed, 0 to remove and 193 not upgraded.
Need to get 182 MB of archives.
After this operation, 447 MB of additional disk space will be used.
Get:1 https://repo.download.nvidia.com/jetson/common r32/main arm64 libcudnn7 arm64 7.6.3.28-1+cuda10.0 [182 MB]
Fetched 182 MB in 1min 11s (2561 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package libcudnn7.
(Reading database ... 56410 files and directories currently installed.)
Preparing to unpack .../libcudnn7_7.6.3.28-1+cuda10.0_arm64.deb ...
Unpacking libcudnn7 (7.6.3.28-1+cuda10.0) ...
dpkg: error processing archive /var/cache/apt/archives/libcudnn7_7.6.3.28-1+cuda10.0_arm64.deb (--unpack):
unable to make backup link of './usr/lib/aarch64-linux-gnu/libcudnn.so.7.6.3' before installing new version: Invalid cross-device link
dmesg: read kernel buffer failed: Operation not permitted
Errors were encountered while processing:
Try “apt search cudnn”. As for the error, it looks related to the way NVIDIA is bind-mounting the CUDA resources inside Docker (basically, the inside has to match the outside). NVIDIA can probably provide support for that.
Are you running the image on the latest JetPack? If so, it looks like this is something NVIDIA needs to fix.