Jetson TK1 - Including OpenCV in Cross-Compile

Hello,

I followed Satish Salian’s guide at http://devblogs.nvidia.com/parallelforall/nvidia-nsight-eclipse-edition-for-jetson-tk1/ for Ubuntu 12.04 → 14.04 cross development and was able to compile and run the boxfilter sample successfully. However, I started running into issues when I tried to build a personal project that uses OpenCV. I was eventually able to compile and start the program, but it crashed with a segmentation fault. Here are the steps I took:

  1. Added the following directory to NVCC Compiler -> Includes to point to OpenCV headers:

    /usr/local/include/

  2. Copied OpenCV libs from TK1 to Host:

    sudo scp ubuntu@10.42.0.61:/usr/lib/libopencv* /usr/arm-linux-gnueabihf/lib/

  3. Added the following directory to NVCC Linker -> Libraries -> Library search path (-L):

    /usr/arm-linux-gnueabihf/lib

  4. Added the OpenCV libraries (-l) that the project needs (e.g. opencv_core, opencv_imgproc, etc.)
  5. Checked '-fPIC' in NVCC Compiler→Miscellaneous
  6. Checked '-shared' in NVCC Linker→Miscellaneous
  7. Built successfully

This sequence is a result of addressing individual errors from the IDE as they arose, so I am sure some of these actions are incompatible. Here are a few things I postulate are leading to errors:

  • I am not recreating the symbolic links after I copy the OpenCV libraries to the host (see the sketch after this list). There are three different .so files for each library; how do I determine which one to link to? For example, there are libopencv_core.so, libopencv_core.so.2.4, and libopencv_core.so.2.4.8.
  • The include directory added in step 1, /usr/local/include, contains the headers of the host machine's OpenCV installation. These headers are then erroneously combined with the downloaded remote libraries in the added library search path, /usr/arm-linux-gnueabihf/lib. I noticed in this blog post that the author mentions the libopencv4tegra CMake configuration incorrectly sets the OpenCV include directories: http://namniart.com/jetson-tk1/ubuntu/2014/05/20/ROS-on-Jetson-part2.html
  • Steps 5 and 6 are purely the result of googling compiler errors on Stack Exchange; they don't feel right.
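
For reference, here is a minimal sketch of how the symbolic-link chain is normally laid out, assuming the 2.4.8 libraries from step 2 are sitting in /usr/arm-linux-gnueabihf/lib (the unversioned .so is what -lopencv_core resolves to, and it should ultimately point at the fully versioned file):

    cd /usr/arm-linux-gnueabihf/lib
    # the fully versioned file holds the actual code; the others are just symlinks to it
    for lib in libopencv_*.so.2.4.8; do
      sudo ln -sf "$lib" "${lib%.8}"           # e.g. libopencv_core.so.2.4 -> libopencv_core.so.2.4.8
      sudo ln -sf "${lib%.8}" "${lib%.2.4.8}"  # e.g. libopencv_core.so    -> libopencv_core.so.2.4
    done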

TL;DR: What is the proper way to cross-compile with the libopencv4tegra libraries?

Thanks for your time,
Alexander K

I was able to compile/run my program successfully by compiling everything on the board itself using the synchronization method. I will post a guide detailing my procedure soon.

I uploaded my guide for using the synchronization method for TK1 compilation at the following page:

http://visidyn.com/wiki/doku.php?id=jetson_tk1_synchronize_projects

Very nicely done! The Open Source spirit at its best.

Hi Visidyn,

Did you see this issue (OpenCV4Tegra 2.4.8.2 on Jetson TK1 - Jetson TK1 - NVIDIA Developer Forums) while working with libopencv4tegra?

Visidyn,

I was wondering, when you were getting the segmentation faults, whether you used cuda-gdb to see what the issue was. I am trying to use OpenCV for cross-compiling and I have done the same steps you did, but every time I run the program I get an illegal instruction or a segmentation fault.
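
(For reference, one way to pinpoint where the crash happens is to run the binary on the board under cuda-gdb, or plain gdb for CPU-side faults, and grab a backtrace; a minimal sketch, assuming the executable is called my_app:)

    cuda-gdb ./my_app
    (cuda-gdb) run                    # wait for the SIGSEGV / SIGILL
    (cuda-gdb) bt                     # backtrace shows which frame/library faulted
    (cuda-gdb) info sharedlibrary     # lists which .so files were actually loaded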

Does this mean OpenCV cannot be made to work in a cross-compilation environment?

It’s certainly possible to cross-compile OpenCV for the Jetson TK1 from a desktop, but the tricky part of building OpenCV is always building the many library dependencies you’ll want in OpenCV; if you are cross-compiling OpenCV, you’ll need to cross-compile all of those dependencies too, otherwise you’ll experience crashes.

It sounds like your OpenCV-based program compiled fine, but your dependencies probably weren’t cross-compiled correctly, so the instant you try to load an image or video from a file, access a camera, or display an image, your program crashes.

So I definitely recommend that you stick with native compilation onboard the Jetson TK1, since it is quite fast anyway (if it uses all 4 cores, e.g. by running “make -j4” instead of just “make”). Better still, use the prebuilt OpenCV4Tegra library, since it also contains a large amount of multi-core, SIMD-optimized CPU code that you won’t have if you build OpenCV yourself.
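
For example, a minimal sketch of a native build on the TK1 itself (the source path and the exact CMake options are just examples; adjust them to the modules you need):

    # on the Jetson TK1
    cd ~/opencv-2.4.x && mkdir build && cd build     # example path to the OpenCV sources
    cmake -DWITH_CUDA=ON -DCUDA_ARCH_BIN="3.2" -DCUDA_ARCH_PTX="" ..
    make -j4          # use all 4 Cortex-A15 cores
    sudo make install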

If for some reason you really need to cross-compile OpenCV, look into how to cross-compile FFmpeg for ARM, since FFmpeg is usually the most complex part; if you can cross-compile FFmpeg, then you can figure out how to cross-compile the other dependencies!
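
As a rough illustration, the kind of configure line involved when cross-compiling FFmpeg for an armhf target looks something like the following (the toolchain prefix and install prefix are examples, not values taken from this thread):

    # on the x86 host, inside the FFmpeg source tree
    ./configure \
      --enable-cross-compile \
      --cross-prefix=arm-linux-gnueabihf- \
      --arch=arm \
      --target-os=linux \
      --prefix=/usr/arm-linux-gnueabihf
    make -j"$(nproc)"
    make install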

OK, I decided to develop locally on my Tegra board. I would like to install Nsight on my ARM system and develop locally. Where can I download Nsight Eclipse Edition for ARM local development?

Doesn’t look like that’s possible; Nsight is for x86 cross-compiling only: http://devblogs.nvidia.com/parallelforall/nvidia-nsight-eclipse-edition-for-jetson-tk1/

I believe you would need to compile manually.

Hi All,

I want to resume this topic. So, what is the final solution?

Suppose my host Ubuntu does not have OpenCV installed; how do I set up Nsight and remote builds on the target system? From the above answers, do I either copy all the library dependencies from the target system to the host and then cross-compile, or use a synchronized project?

Hi,

First, please upgrade your cross-compiler to 5.x
https://releases.linaro.org/components/toolchain/binaries/latest-5/aarch64-linux-gnu/

Then follow this link to cross-compile your project:
How to use OpenCV3.1 with NVIDIA Nsight? - Jetson TX1 - NVIDIA Developer Forums
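
As a rough sketch (the tarball name below is only an example; use the release you actually download from that page), the toolchain can be unpacked on the host and then selected in Nsight’s toolchain settings (or passed to nvcc via -ccbin):

    # on the host
    TOOLCHAIN=gcc-linaro-5.4.1-2017.01-x86_64_aarch64-linux-gnu   # example name
    tar xf ${TOOLCHAIN}.tar.xz -C /opt
    export PATH=/opt/${TOOLCHAIN}/bin:$PATH
    aarch64-linux-gnu-g++ --version   # sanity check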

Hi,

I see some differences after installing OpenCV 3.2 on my target system, by typing “pkg-config --cflags --libs opencv”.

On my target board it returns:

-I/usr/include/opencv -lopencv_cudabgsegm -lopencv_cudaobjdetect -lopencv_cudastereo -lopencv_shape -lopencv_stitching -lopencv_cudafeatures2d -lopencv_superres -lopencv_cudacodec -lopencv_videostab -lopencv_cudaoptflow -lopencv_cudalegacy -lopencv_calib3d -lopencv_features2d -lopencv_objdetect -lopencv_highgui -lopencv_videoio -lopencv_photo -lopencv_imgcodecs -lopencv_cudawarping -lopencv_cudaimgproc -lopencv_cudafilters -lopencv_video -lopencv_ml -lopencv_imgproc -lopencv_flann -lopencv_cudaarithm -lopencv_core -lopencv_cudev

On the host (Ubuntu 16.04) it returns:
-I/usr/include/opencv -L/usr/local/cuda-8.0/lib64 -lopencv_calib3d -lopencv_contrib -lopencv_core -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_highgui -lopencv_imgproc -lopencv_legacy -lopencv_ml -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_superres -lopencv_ts -lopencv_video -lopencv_videostab -lopencv_detection_based_tracker -lopencv_esm_panorama -lopencv_facedetect -lopencv_imuvstab -lopencv_tegra -lopencv_vstab -lcufft -lnpps -lnppi -lnppc -lcudart -latomic -ltbb -lrt -lpthread -lm -ldl

Do you know how to configure this?

Thanks

It looks like on your board pkg-config returns the flags and libs of OpenCV 3.2 installed in /usr, while on your cross-compile host it returns the flags and libs of opencv4tegra 2.4 installed in /usr.
Check which version is installed in each (libopencv_gpu is typical of OpenCV 2, while the libopencv_cuda* libraries came with OpenCV 3); if each location really has OpenCV 3.2, it may be an issue with pkg-config. You may also look at /usr/lib/pkgconfig/opencv.pc and see what it says.

You may also consider having your Jetson rootfs rsync’d with your host L4T rootfs to avoid such discrepancies.
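
One way to avoid mixing the two sets of flags, as a sketch assuming the target rootfs has been copied (or rsync’d) to ~/target-rootfs on the host, is to point pkg-config at the target’s .pc files instead of the host’s:

    export SYSROOT=$HOME/target-rootfs                # example location of the copied rootfs
    export PKG_CONFIG_SYSROOT_DIR=$SYSROOT
    export PKG_CONFIG_LIBDIR=$SYSROOT/usr/lib/pkgconfig
    # now this prints the target's OpenCV flags, with paths prefixed by $SYSROOT
    pkg-config --cflags --libs opencv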

yeah, you’re right.

I don’t know why my host had opencv4tegra installed; maybe it happened while installing JetPack 3.0?

Anyway, I’m having a problem cross-compiling a project with OpenCV. I followed the tutorial linked above.

1. Create the Nsight project
File → New → CUDA C/C++ Project
Empty Project → Generate PTX code=5.3, Generate GPU code=5.3 → CPU Architecture=AArch64

  2. Add the configuration (right click → Properties)
    Build → Settings → Tool Settings
    NVCC Compiler → Includes → Include paths → Add “/usr/include/opencv” <= OpenCV on the target system
    NVCC Linker → Libraries → Libraries → Add “opencv_core”, “opencv_highgui”
    NVCC Linker → Libraries → Library search path → Add “/usr/lib” <= OpenCV on the target system

When I hit the build (hammer) button, I get errors like this:

Building file: ../src/main.cu
Invoking: NVCC Compiler
/usr/local/cuda-8.0/bin/nvcc -I/usr/include/opencv -G -g -O0 -gencode arch=compute_53,code=sm_53 -odir "src" -M -o "src/main.d" "../src/main.cu"
/usr/local/cuda-8.0/bin/nvcc -I/usr/include/opencv -G -g -O0 --compile --relocatable-device-code=false -gencode arch=compute_53,code=compute_53 -gencode arch=compute_53,code=sm_53 -x cu -o "src/main.o" "../src/main.cu"
Finished building: ../src/main.cu

Building target: Image_Resize
Invoking: NVCC Linker
/usr/local/cuda-8.0/bin/nvcc --cudart static -L/usr/lib --relocatable-device-code=false -gencode arch=compute_53,code=compute_53 -gencode arch=compute_53,code=sm_53 -link -o "Image_Resize" ./src/main.o -lopencv_highgui -lopencv_core -lopencv_imgproc -lopencv_cudawarping
/usr/bin/ld: skipping incompatible /usr/lib/libopencv_highgui.so when searching for -lopencv_highgui
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/5/../../../../lib/libopencv_highgui.so when searching for -lopencv_highgui
/usr/bin/ld: skipping incompatible /usr/lib/../lib/libopencv_highgui.so when searching for -lopencv_highgui
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/5/../../../libopencv_highgui.so when searching for -lopencv_highgui
/usr/bin/ld: skipping incompatible //usr/lib/libopencv_highgui.so when searching for -lopencv_highgui
/usr/bin/ld: cannot find -lopencv_highgui
/usr/bin/ld: skipping incompatible /usr/lib/libopencv_core.so when searching for -lopencv_core
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/5/../../../../lib/libopencv_core.so when searching for -lopencv_core
/usr/bin/ld: skipping incompatible /usr/lib/../lib/libopencv_core.so when searching for -lopencv_core
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/5/../../../libopencv_core.so when searching for -lopencv_core
/usr/bin/ld: skipping incompatible //usr/lib/libopencv_core.so when searching for -lopencv_core
/usr/bin/ld: cannot find -lopencv_core

You are configuring for a TX1:

Empty Project -> Generate PTX code=5.3, Generate GPU code=5.3 -> CPU Architecture=AArch64

Are you using a TK1? For a TK1, it should be something like:

Empty Project -> Generate PTX code=3.2, Generate GPU code=3.2 -> CPU Architecture=armhf

Do you know which compiler produced your OpenCV library? If it is opencv4tegra, it might have been built with gcc 4, so I would advise cross-compiling with gcc 4 for armhf. If you’ve built your own OpenCV library, you should use the same compiler version for armhf.
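
Those “skipping incompatible” messages also suggest the linker is finding the host’s x86-64 OpenCV libraries in /usr/lib rather than ARM ones; a quick way to check what a given .so was built for is the file command, e.g.:

    # -L dereferences the symlink; the output should say ARM/aarch64, not x86-64,
    # for a library you intend to link into a cross-compiled binary
    file -L /usr/lib/libopencv_core.so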

Hi,

Please copy the OpenCV libraries back to the host and add the folder information in Nsight.

For the cross-compiler, Ubuntu 14.04 uses GCC 4.8 (GCC 5.4 is for Ubuntu 16.04).
Sorry for the misleading information.