Cross-compiling a Dockerfile with OpenCV CUDA support on x86 for Jetson Xavier

Hey guys,
my goal is to cross-compile a Docker image with OpenCV CUDA support on x86.

At first I used this manual from the NVIDIA Docker Hub: NVIDIA Container Runtime on Jetson · NVIDIA/nvidia-docker Wiki · GitHub

My systems: a server with an RTX 3090 running Ubuntu 20.04 LTS, and a Jetson Xavier NX with JetPack 4.5.

So the first point is that I can build the Dockerfile on the Jetson with this manual, and at the end it prints out my system information.
In the next step I built the Dockerfile with the NVIDIA base image 32.5.0 with OpenCV CUDA support on the Jetson. That works too.

So now I would like to do the same things on my x86 workstation, but it fails. I have done every single step from the NVIDIA manual for cross-compiling. I tried it on Ubuntu 18.04 and 20.04 with the CUDA 11.6 driver for my graphics card on the host system. I have also tried to install CUDA 10.2 on my host system, but that fails. I have also installed the NVIDIA SDK Manager to get the right dependencies for JetPack 4.5.
I have installed the NVIDIA runtimes and edited the daemon.json on the host (see the sketch after this paragraph) so that the runtime is used in the build process like on the Xavier NX.
I can build the Docker container on the host, but only without CUDA support for OpenCV or anything else.
Every time, I copied the samples from my Xavier into the Docker container, like the manual says.
When I go into the container and execute ./deviceQuery, it fails on the host. So if this base Dockerfile from the manual doesn't work, building a Docker image with OpenCV CUDA won't work either.
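(For reference, the usual daemon.json change to make the NVIDIA runtime the default, so that it is also used during docker build, looks roughly like this. This is a sketch and assumes nvidia-container-runtime is already installed on the host:)

# /etc/docker/daemon.json: register the NVIDIA runtime and make it the default
$ sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
EOF
$ sudo systemctl restart docker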

Maybe someone can help me. My goal is to deploy applications for the Jetson Xavier that are built on an x86 host. The built image should be pushed to a Docker hub, and then I can manage it with Open Horizon to deploy it on different Jetson Xavier NX devices.

And here are my Dockerfiles:
From the Manual:

FROM nvcr.io/nvidia/l4t-base:r32.5.0

RUN apt-get update && apt-get install -y --no-install-recommends make g++
COPY ./samples /tmp/samples

WORKDIR /tmp/samples/1_Utilities/deviceQuery
RUN make clean && make

CMD ["./deviceQuery"]
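(For completeness, the manual builds and runs this image roughly like so; devicequery is just an example tag:)

$ sudo docker build -t devicequery .
$ sudo docker run -it --runtime nvidia devicequery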

And my own Dockerfile, which works on the Jetson Xavier, but not on my host system:

FROM nvcr.io/nvidia/l4t-base:r32.5.0

RUN apt-get update && apt-get install -y --no-install-recommends make g++
COPY ./samples /tmp/samples

WORKDIR /tmp/samples/1_Utilities/deviceQuery
RUN make clean && make

WORKDIR /

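# Install the build dependencies for OpenCV with CUDA support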
RUN apt-get install -y build-essential cmake git unzip wget pkg-config zlib1g-dev && \
apt-get install -y libjpeg-dev libjpeg8-dev libjpeg-turbo8-dev libpng-dev libtiff-dev && \
apt-get install -y libavcodec-dev libavformat-dev libswscale-dev libglew-dev && \
apt-get install -y libgtk2.0-dev libgtk-3-dev libcanberra-gtk* && \
apt-get install -y python-dev python-numpy python-pip && \
apt-get install -y python3-dev python3-numpy python3-pip && \
apt-get install -y libxvidcore-dev libx264-dev libgtk-3-dev && \
apt-get install -y libtbb2 libtbb-dev libdc1394-22-dev libxine2-dev && \
apt-get install -y gstreamer1.0-tools libv4l-dev v4l-utils v4l2ucp  qv4l2 && \
apt-get install -y libgstreamer-plugins-base1.0-dev libgstreamer-plugins-good1.0-dev && \
apt-get install -y libavresample-dev libvorbis-dev libxine2-dev libtesseract-dev && \
apt-get install -y libfaac-dev libmp3lame-dev libtheora-dev libpostproc-dev && \
apt-get install -y libopencore-amrnb-dev libopencore-amrwb-dev && \
apt-get install -y libopenblas-dev libatlas-base-dev libblas-dev && \
apt-get install -y liblapack-dev liblapacke-dev libeigen3-dev gfortran && \
apt-get install -y libhdf5-dev protobuf-compiler && \
apt-get install -y libprotobuf-dev libgoogle-glog-dev libgflags-dev && \
apt-get install -y libopenmpi-dev

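# Download and unpack the OpenCV 4.5.5 and opencv_contrib sources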
RUN wget -O opencv.zip https://github.com/opencv/opencv/archive/4.5.5.zip  && \
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.5.5.zip && \
unzip opencv.zip && \
unzip opencv_contrib.zip && \
mv opencv-4.5.5 opencv && \
mv opencv_contrib-4.5.5 opencv_contrib && \
rm opencv.zip && \
rm opencv_contrib.zip

# Note: CUDA_ARCH_BIN=7.2 targets the Xavier NX (compute capability 7.2);
# 5.3 would target a Jetson Nano/TX1 instead.
RUN cd /opencv && mkdir -p build && cd build && \
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr \
-D OPENCV_EXTRA_MODULES_PATH=/opencv_contrib/modules \
-D EIGEN_INCLUDE_PATH=/usr/include/eigen3 \
-D WITH_OPENCL=OFF \
-D WITH_CUDA=ON \
-D CUDA_ARCH_BIN=7.2 \
-D CUDA_ARCH_PTX="" \
-D WITH_CUDNN=ON \
-D WITH_CUBLAS=ON \
-D ENABLE_FAST_MATH=ON \
-D CUDA_FAST_MATH=ON \
-D OPENCV_DNN_CUDA=ON \
-D ENABLE_NEON=ON \
-D WITH_QT=OFF \
-D WITH_OPENMP=ON \
-D BUILD_TIFF=ON \
-D WITH_FFMPEG=ON \
-D WITH_GSTREAMER=ON \
-D WITH_TBB=ON \
-D BUILD_TBB=ON \
-D BUILD_TESTS=OFF \
-D WITH_EIGEN=ON \
-D WITH_V4L=ON \
-D WITH_LIBV4L=ON \
-D OPENCV_ENABLE_NONFREE=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D INSTALL_PYTHON_EXAMPLES=OFF \
-D PYTHON3_PACKAGES_PATH=/usr/lib/python3/dist-packages \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D BUILD_EXAMPLES=OFF ..


RUN cd /opencv/build && make -j6 

#RUN sudo rm -r /usr/include/opencv4/opencv2
RUN cd /opencv/build && make install
RUN ldconfig

RUN cd /opencv/build && make clean
RUN apt-get update
RUN rm -r opencv/
RUN rm -r opencv_contrib/
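(Once qemu is set up, the build command on the x86 host is the same as on the Jetson; jetson-opencv is just an example tag:)

$ sudo docker build -t jetson-opencv .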

Thank you very much!

Hi,

Would you mind sharing the output log with us as well?

Please note that qemu is used to simulate the ARM environment on a host machine.
Have you set up qemu with the steps shared below?

Thanks.


Hi,
thank you for the answer.

Today I installed a blank Ubuntu 18.04 to set everything up again step by step.
So I installed JetPack 4.5 on the host with the NVIDIA SDK Manager.
Then I installed Docker 5:20.10.13-0~ubuntu-bionic.
After this I followed the instructions in your link here:

$ sudo apt-get install qemu binfmt-support qemu-user-static

# Check if the entries look good.
$ sudo cat /proc/sys/fs/binfmt_misc/status
enabled

# See if /usr/bin/qemu-aarch64-static exists as one of the interpreters.
$ cat /proc/sys/fs/binfmt_misc/qemu-aarch64
enabled
interpreter /usr/bin/qemu-aarch64-static
flags: OCF
offset 0
magic 7f454c460201010000000000000000000200b700
mask ffffffffffffff00fffffffffffffffffeffffff

But my flags show only OC and not OCF; I think this is the first problem.
What should I do to get OCF?
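(For reference: one common way to get the F, i.e. fix-binary, flag is to re-register the qemu handlers through the multiarch/qemu-user-static image. A sketch, assuming Docker is already running on the host:)

# Re-register all qemu binfmt handlers with the persistent fix-binary (F) flag
$ sudo docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Check that the flags line now shows OCF
$ cat /proc/sys/fs/binfmt_misc/qemu-aarch64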
Actually, I haven't installed the NVIDIA 510 driver for my RTX 3090 yet. Should I install it, given that it uses CUDA 11.6 and not 10.2?

The pictures show the NVIDIA SDK Manager installation and the qemu output.


Hey,
now I have fixed the OCF problem, but now I want to start the Docker container on the x86 host to test some things. The problem is that it shows me this after I run it:

What should I do so that I can use the NVIDIA runtime?
Currently I have installed these things on my blank Ubuntu 18.04:
JetPack 4.5 for the host
docker 5:20.10.13-0~ubuntu-bionic
qemu

  • I haven't installed an NVIDIA driver and CUDA for my RTX 3090 (should I do this with 470+ and CUDA 10.2?)

@AastaLLL ?

Hi,
@maximilian.benkert, I have seen that you had the same problem as me. You solved it with this solution: “In order to test (and run) my software, I use an x_86 docker image that contains the same versions of CUDA, TensorRT, PyTorch, etc. as the Jetson. Usually the two systems are compatible. Sometimes I had to customize something on the Jetson. Nevertheless, this approach allows me to test the software in general.”

So is it right that you compile a Jetson Docker image inside an x86 Docker image? Maybe you can explain some more details.

Best regards

Hi @carl.hering.king,
Yes, sure. I can explain our approach in more detail:

First, we distinguish between

  • a setup for compiling the resources for Jetson and

  • a setup for testing on a workstation (like your server).

About the setup for compiling:

We use QEMU to emulate an ARM architecture on the workstation. We have two Docker images that simulate the Jetson: one is nvcr.io/nvidia/l4t-base and the other is a copy of our Jetson. They enable us to (cross-)compile our source code (mostly C++), but we can't run the compiled code on the server.
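(A sketch of what such a cross-build can look like; the image name and the use of buildx are assumptions, not necessarily the exact setup described here:)

# Build an arm64 image on the x86 workstation through the registered qemu handlers
$ sudo docker buildx build --platform linux/arm64 -t my-jetson-app .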

About the setup for testing

We also have a Docker image with the exact same CUDA version and the other NVIDIA tools. It is a normal x_86 image, so it runs on the workstation. Since the NVIDIA tools work similarly on both architectures, this approach enables us to test the source code in general. However, there are obvious limitations (e.g., when using GStreamer we do not have the same encoders/decoders).
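(A minimal sketch of such a testing image, assuming the CUDA 10.2 / cuDNN 8 combination that ships with JetPack 4.5; the exact base tag is an assumption and may have changed:)

# x86 image that mirrors the Jetson's CUDA/cuDNN versions, for testing only
FROM nvidia/cuda:10.2-cudnn8-devel-ubuntu18.04
# ... install the same versions of TensorRT, PyTorch, etc. as on the Jetson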

Hi @maximilian.benkert,
thank you for your answer. I have done my compiling with the manuals you sent.
I can now compile programs for the Jetson Xavier. But the problem I have is that I can't build OpenCV with CUDA support. I have set the NVIDIA runtime in my daemon.json on my x86 machine, like in this manual: docker build with nvidia runtime - Stack Overflow

So when I build OpenCV without CUDA it works fine, but when I build it with CUDA it doesn't work, because Docker can't find cuDNN. Here is my error output: Could NOT find CUDNN (missing: CUDNN_LIBRARY CUDNN_INCLUDE_DIR)
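(A common workaround for this CMake error is to pass the cuDNN locations explicitly; the paths below are assumptions based on where JetPack 4.5 usually puts cuDNN:)

# Added to the existing cmake call:
-D CUDNN_LIBRARY=/usr/lib/aarch64-linux-gnu/libcudnn.so.8 \
-D CUDNN_INCLUDE_DIR=/usr/include \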

And here is my Dockerfile (it is the same one I posted above):


Hi @carl.hering.king,

OpenCV is already installed in the Docker image, isn't it?

In addition, if you need more specific features, you can also find additional NVIDIA Jetson Docker images, like one including PyTorch, and maybe also one including cuDNN if it is not part of the base image.

Here is the link I have in my notes: Your First Jetson Container | NVIDIA Developer

If you really need all the features the Jetson provides, have a look at the second link I sent. It explains how you can clone your Jetson so that all features (like cuDNN) are part of your image.
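(For example, pulling one of the prebuilt images would look like this; the tag here is an assumption, so check NGC for the current list:)

$ sudo docker pull nvcr.io/nvidia/l4t-ml:r32.5.0-py3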
