Jetson docker image opencv

Hi,
I have a problem extending the base Docker image. I need to install the OpenCV library in the container, which I do by following the AastaNV/JEP GitHub repository, but I get an error when I build the Dockerfile to create the image. This TOPIC was about something similar to mine, but I didn’t understand the solution.

below is my Dockerfile and the error:
Dockerfile:

FROM nvcr.io/nvidia/l4t-base:r32.4.4

ENV DEBIAN_FRONTEND=noninteractive

WORKDIR /usr/src/app
COPY ./install_opencv4.5.0_Jetson.sh .

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    git \
    apt-utils \
    g++ \
    make \
    cmake \
    unzip \
    wget

RUN chmod +x install_opencv4.5.0_Jetson.sh && ./install_opencv4.5.0_Jetson.sh

Error from building image:

 CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
 Please set them or make sure they are set and tested correctly in the CMake files:
 CUDA_cublas_LIBRARY (ADVANCED)
     linked by target "opencv_cudev" in directory /usr/src/app/workspace/opencv_contrib-4.5.0/modules/cudev
     linked by target "opencv_core" in directory /usr/src/app/workspace/opencv-4.5.0/modules/core
     linked by target "opencv_cudaarithm" in directory /usr/src/app/workspace/opencv_contrib-4.5.0/modules/cudaarithm
     linked by target "opencv_cudaarithm" in directory /usr/src/app/workspace/opencv_contrib-4.5.0/modules/cudaarithm
     linked by target "opencv_flann" in directory /usr/src/app/workspace/opencv-4.5.0/modules/flann

The error is long, so I uploaded the full log as a file; the excerpt above is just the beginning.
error.txt (185.2 KB)

With the NVIDIA Docker runtime, the CUDA libraries and more should be loaded automatically, but I cannot compile during the image build phase. If I do it inside the running container I can install everything, but I need to create an image.
Thanks

Hi,

That’s because the Jetson container doesn’t actually include the CUDA libraries; they are mounted from the host at runtime.
Since the mounting is runtime-only, no CUDA library is available during docker build.

Below are two possible workarounds for this issue:
1. Copy the CUDA toolkit into the container and delete it once OpenCV is built.

2. You can also add a csv file to the directory below to mount the host OpenCV library from Jetson directly:

/etc/nvidia-container-runtime/host-files-for-container.d
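As a sketch, a csv file in that directory lists the host paths to mount, one entry per line, in the same `type, path` format used by the stock l4t.csv that ships with the runtime. The paths below are only illustrative and depend on where CUDA and OpenCV are actually installed on your host:

```csv
dir, /usr/local/cuda-10.2
lib, /usr/lib/aarch64-linux-gnu/libopencv_core.so
sym, /usr/lib/aarch64-linux-gnu/libopencv_core.so.4.5
```

Keep in mind these mounts, like the CUDA ones, only apply at container runtime, not during docker build.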

In addition, one of our users has successfully created an OpenCV Dockerfile based on l4t-base and shared it on the forum.
You can check his post to get some ideas first:

Thanks.

I am following the Dockerfile of the user who proposed the solution, without changing anything in the file. Running it gives me this warning, and then the build continues:

> Sending build context to Docker daemon  10.24kB
> [WARNING]: Empty continuation line found in:
>     RUN apt-get update && apt-get install -y --no-install-recommends          build-essential          ca-certificates          curl          ffmpeg          git          wget          unzip          python3-dev          python3-pip           software-properties-common &&      add-apt-repository -y ppa:openjdk-r/ppa &&      apt-get update && apt-get install -y openjdk-8-jdk  libssl-dev &&      apt-get clean &&      apt-get purge cmake &&  apt autoremove && wget https://github.com/Kitware/CMake/releases/download/v3.18.4/cmake-3.18.4.tar.gz && tar -zxvf cmake-3.18.4.tar.gz && cd cmake-3.18.4 &&  ./bootstrap && make -j8 && make install &&     rm -rf /var/lib/apt/lists/*
> [WARNING]: Empty continuation lines will become errors in a future release.

I will update you if this works.

I succeeded with the solution of @Andrey1984

but I have some questions:

  1. The image goes from 600 MB to 7.6 GB.

  2. In the Dockerfile I copy in the cuda-10.2 folder, because I then install other CUDA packages with apt install.

  3. If I use the CSV approach, the OpenCV library compiled and installed on the host is mounted, but it still cannot be used during docker build?

  4. I will then need the VisionWorks and OpenCV libraries to compile my project in the Docker image, so do I also need to copy in the VisionWorks folder?

You can set the default docker runtime to nvidia, and then CUDA/cuDNN/VisionWorks/etc. will be available to you during docker build operations. See here: https://github.com/dusty-nv/jetson-containers#docker-default-runtime

This should enable OpenCV to find the cuDNN library it is looking for in your Dockerfile. You shouldn’t need to copy CUDA into your container or modify the CSV files, just change the docker daemon’s default-runtime. Users of your container won’t need to change their default runtime, they will just need to run the container with --runtime nvidia. The daemon.json change is needed because you can’t specify the --runtime argument to docker build.
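For reference, a minimal /etc/docker/daemon.json with the default runtime set to nvidia looks roughly like this (the runtimes entry is normally already present on a stock JetPack install; only the default-runtime line needs to be added):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```

After editing the file, restart the daemon with `sudo systemctl restart docker` (or reboot) before running docker build again.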


Just to be clear: on the board where I develop the Docker image with docker build, I edit the Docker daemon’s config file so that the libraries are available during the build phase?

While on the other boards, I just launch the container from my image with the --runtime nvidia parameter?

It works!
But I get this error:

> /usr/bin/ld: /usr/share/visionworks/sources/3rdparty/glfw3/libs/libglfw3.a(x11_clipboard.c.o): undefined reference to symbol 'XConvertSelection'
> //usr/lib/aarch64-linux-gnu/libX11.so.6: error adding symbols: DSO missing from command line
> collect2: error: ld returned 1 exit status
> CMakeFiles/main.dir/build.make:188: recipe for target 'main' failed
> make[2]: *** [main] Error 1
> CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/main.dir/all' failed
> make[1]: *** [CMakeFiles/main.dir/all] Error 2
> Makefile:83: recipe for target 'all' failed
> make: *** [all] Error 2

It seems a package or a PATH/linker flag is missing.
ref: https://stackoverflow.com/questions/24989432/linking-error-dso-missing-from-command-line
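That Stack Overflow thread points at the usual fix: the static libglfw3.a references X11 symbols (here `XConvertSelection`), so libX11 has to be named explicitly on the link line, after the archive that uses it. A hedged CMake sketch, where the `main` target and the library list are illustrative (glfw may also need Xrandr, Xi, Xcursor, etc. depending on how it was built):

```cmake
# libglfw3.a references XConvertSelection from libX11, so list X11
# after the archive; argument order matters to the linker
target_link_libraries(main
    /usr/share/visionworks/sources/3rdparty/glfw3/libs/libglfw3.a
    X11)
```

The equivalent fix on a raw g++ command line is appending `-lX11` after the glfw archive.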

Yep, I solved the library linking problem. Now I have an issue with the output between Docker and the host when I use the VisionWorks ovxio library, so I opened a new TOPIC.


updated script

also available as deployment:

xhost +
docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY --privileged --ipc=host \
    -v /tmp/.X11-unix/:/tmp/.X11-unix/ -v /tmp/argus_socket:/tmp/argus_socket \
    --cap-add SYS_PTRACE iad.ocir.io/idso6d7wodhe/jetson_nx/opencv541

# inside the container:
cp -r /usr/local/opencv-4.5.1-dev/lib/python3.6/dist-packages/cv2 /usr/lib/python3.6/dist-packages/cv2
export OPENCV_VERSION=opencv-4.5.1-dev
export LD_LIBRARY_PATH=/usr/local/$OPENCV_VERSION/lib
# then create a source file and build it with the command below
g++ -std=c++11 -Wall -I/usr/local/$OPENCV_VERSION/include/opencv4 -I/usr/local/cuda/targets/aarch64-linux/include simple_video.cpp -L/usr/local/$OPENCV_VERSION/lib -lopencv_core -lopencv_imgproc -lopencv_video -lopencv_videoio -lopencv_highgui -o simple_video
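One small caveat about the export above: assigning LD_LIBRARY_PATH directly overwrites anything the container image already put there. A safer sketch appends to the existing value instead (the version string matches the one used above):

```shell
# append to LD_LIBRARY_PATH instead of clobbering it;
# ${VAR:+:$VAR} only adds the ":existing" part when VAR is non-empty
export OPENCV_VERSION=opencv-4.5.1-dev
export LD_LIBRARY_PATH=/usr/local/$OPENCV_VERSION/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
echo "$LD_LIBRARY_PATH"
```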