L4T-base with cuDNN

Hello,
I would like to install OpenCV 4.5.3 with cuDNN on the l4t-base docker image. I found that l4t-base does not have cuDNN pre-installed. How can I install cuDNN into the l4t-base image?

Best regards,
Naveen

Hi @naveen.crasta, cuDNN is automatically mounted into the l4t-base image when you run it with --runtime nvidia. If you want an OpenCV build with cuDNN enabled, then I recommend using one of the recent l4t-ml containers, which already have that installed. For example, if you update your TX2’s JetPack to JetPack 4.6.1, you can use nvcr.io/nvidia/l4t-ml:r32.7.1-py3.
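As a quick sanity check (a sketch assuming JetPack 4.6.1 on the host; exact library paths can vary), you can run the container with the NVIDIA runtime and look for the mounted cuDNN libraries:

# Run with the NVIDIA runtime so the host's CUDA/cuDNN libraries are mounted in,
# then check that libcudnn is visible inside the container
sudo docker run --rm --runtime nvidia nvcr.io/nvidia/l4t-ml:r32.7.1-py3 \
     sh -c "ls /usr/lib/aarch64-linux-gnu/ | grep libcudnn"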


Hi @dusty_nv,
Following your recommendation, I tried to build OpenCV with cuDNN on the nvcr.io/nvidia/l4t-ml:r32.7.1-py3 image with the following flags (the CUDA-related ones are at the bottom of the list).

RUN cmake \
     -D CPACK_BINARY_DEB=ON \
     -D CMAKE_BUILD_TYPE=RELEASE \
     -D CMAKE_INSTALL_PREFIX=$CMAKE_INSTALL \
     -D OPENCV_ENABLE_NONFREE=ON \
     -D ENABLE_CCACHE=ON \
     -D BUILD_JPEG=OFF \
     -D BUILD_JASPER=OFF \
     -D BUILD_JAVA=OFF \
     -D BUILD_opencv_python2=OFF \
     -D BUILD_opencv_python3=ON \
     -D INSTALL_PYTHON_EXAMPLES=OFF \
     -D INSTALL_C_EXAMPLES=OFF \
     -D OPENCV_EXTRA_MODULES_PATH='/OpenCV/opencv_contrib/modules' \
     -D PYTHON_DEFAULT_EXECUTABLE=/usr/bin/python3 \
     -D PYTHON3_EXECUTABLE=/usr/bin/python3.8 \
     -D PYTHON3_INCLUDE_DIR=/usr/include/python3.8 \
     -D PYTHON3_LIBRARY=/usr/lib/$(uname -i)-linux-gnu/libpython3.8.so \
     -D PYTHON3_PACKAGES_PATH=/usr/local/lib/python3.8/dist-packages \
     -D BUILD_EXAMPLES=OFF \
     -D WITH_VTK=OFF \
     -D ENABLE_FAST_MATH=ON \
     -D WITH_LIBV4L=ON \
     -D WITH_GSTREAMER=OFF \
     -D WITH_GSTREAMER_0_10=OFF \
     -D WITH_TBB=ON \
     -D WITH_CUDA=ON \
     -D CUDA_ARCH_BIN=5.3,6.2,7.2,7.5 \
     -D CUDA_ARCH_PTX="" \
     -D CUDA_FAST_MATH=ON \
     -D WITH_CUBLAS=ON \
     -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.2 \
     -D CUDA_BIN_PATH=/usr/local/cuda-10.2 \
     -D CUDNN_VERSION='10.2' \
     -D WITH_CUDNN=ON \
     -D OPENCV_DNN_CUDA=ON \
     -D CUDNN_INCLUDE_DIR=/usr/include \
     -D CUDNN_LIBRARY=/usr/include \
     ../ && \
     make -j$(nproc) && make install && make package

However, I ended up with a long list of errors like the following (I have pasted only the first one):

-- Configuring done
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
CUDA_cublas_LIBRARY (ADVANCED)
    linked by target "opencv_cudev" in directory /OpenCV/opencv_contrib/modules/cudev
    linked by target "opencv_test_cudev" in directory /OpenCV/opencv_contrib/modules/cudev/test
    linked by target "opencv_test_core" in directory /OpenCV/opencv/modules/core
    linked by target "opencv_perf_core" in directory /OpenCV/opencv/modules/core
    linked by target "opencv_core" in directory /OpenCV/opencv/modules/core
    linked by target "opencv_test_cudaarithm" in directory /OpenCV/opencv_contrib/modules/cudaarithm

What could be the reason for this error? Is it not possible to cross-compile?

Correction: I have a Xavier board, but I am trying to cross-compile on an x86-64 machine using QEMU with the following command:
docker build -t opencv-qemu --platform linux/arm64 .
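This assumes QEMU's aarch64 binfmt handlers are already registered with Docker on the x86-64 host; one common way to do that (using the multiarch/qemu-user-static image as an example, not necessarily the exact setup used here) is:

# Register QEMU binfmt handlers so Docker can build/run arm64 images on an x86-64 host
sudo docker run --rm --privileged multiarch/qemu-user-static --reset -p yes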

Thanks,
Naveen

On JetPack 4.x, in order to use CUDA/cuDNN libraries during docker build operations, you need to set the default docker runtime to nvidia: https://github.com/dusty-nv/jetson-containers#docker-default-runtime
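A minimal sketch of that change (the standard nvidia-container-runtime setup from the linked instructions; the runtime path may differ on your system) is to edit /etc/docker/daemon.json:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

After saving the file, restart the daemon (e.g. sudo systemctl restart docker) so that subsequent docker build commands pick up the nvidia runtime.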

Then these CUDA/cuDNN libraries will get mounted during the docker build. However, this won’t work under QEMU emulation on an x86-64 host, because the Jetson’s libraries aren’t there for the runtime to mount.
