I followed Satish Salian’s guide at http://devblogs.nvidia.com/parallelforall/nvidia-nsight-eclipse-edition-for-jetson-tk1/ for Ubuntu 12.04/14.04 cross development and was able to compile and run the boxfilter sample successfully. However, I ran into issues when I tried running a personal project that uses OpenCV. I eventually got the program to compile and start, but it then hit a segmentation fault. Here are the steps I took:
Added the following directory to NVCC Compiler -> Includes to point to the OpenCV headers:
This sequence is the result of addressing individual errors from the IDE as they arose, so I am sure some of these actions are incompatible. Here are a few things I suspect are causing the errors:
I am not creating symbolic links after I copy the OpenCV libraries to the host. There are three different .so files for each library; how do I determine which one to link against? For example, there is a libopencv_core.so, a libopencv_core.so.2.4, and a libopencv_core.so.2.4.8.
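For what it's worth, the usual shared-library naming convention is a chain of two symlinks: the linker's -lopencv_core flag resolves the unversioned libopencv_core.so at link time, while the runtime loader looks up the soname libopencv_core.so.2.4, and both ultimately point at the fully versioned real file. A sketch using dummy files in /tmp (on the device the real files live under /usr/lib):

```shell
# Recreate the standard symlink chain (dummy files for illustration only):
mkdir -p /tmp/sodemo && cd /tmp/sodemo
touch libopencv_core.so.2.4.8                            # the real library file
ln -sf libopencv_core.so.2.4.8 libopencv_core.so.2.4     # soname, used by the runtime loader
ln -sf libopencv_core.so.2.4   libopencv_core.so         # linker name, used by -lopencv_core
ls -l libopencv_core.so*
```

So after copying the libraries to the host, recreating these two symlinks next to the copied .so.2.4.8 files should let the cross-linker find them.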
The include directory added in step 1, /usr/local/include, contains the headers for the host machine's OpenCV installation. These headers are then erroneously combined with the downloaded remote libraries in the added library search path, /usr/arm-linux-gnueabihf/lib. I noticed in this blog post that the author mentions the libopencv4tegra CMake configuration incorrectly sets the OpenCV include directories:
http://namniart.com/jetson-tk1/ubuntu/2014/05/20/ROS-on-Jetson-part2.html
Steps 5 and 6 are purely the result of Googling and searching Stack Exchange for compiler errors; they don't feel right.
TL;DR: What is the proper way to cross-compile with the libopencv4tegra libraries?
I was able to compile and run my program successfully by compiling everything on the board itself using the synchronized-projects method. I will post a guide detailing my procedure soon.
I was wondering, when you were getting the segmentation faults, whether you used cuda-gdb to see what the issue was. I am trying to use OpenCV with cross compiling, and I have done the same steps you did, but every time I run my program I get an illegal instruction or a segmentation fault.
It’s certainly possible to cross-compile OpenCV for the Jetson TK1 from a desktop, but the tricky part of building OpenCV is always its many library dependencies. If you cross-compile OpenCV, you need to cross-compile all of those dependencies too; otherwise you’ll experience crashes.
It sounds like your OpenCV-based program compiled fine, but your dependencies probably weren’t cross-compiled correctly, so the instant you try to load an image or video from a file, access a camera, or display an image, your program crashes.
So I definitely recommend sticking with native compilation on the Jetson TK1, since it is quite fast anyway (if it uses all 4 cores, e.g. by running “make -j4” instead of plain “make”). Better still, use the prebuilt OpenCV4Tegra library, since it also contains a large amount of multi-core, SIMD-optimized CPU code that you won’t get if you build OpenCV yourself.
If for some reason you really need to cross-compile OpenCV, look into how to cross-compile FFmpeg for ARM, since FFmpeg is usually the most complex part; if you can cross-compile FFmpeg, you can figure out how to cross-compile the other dependencies!
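As a rough sketch of what that involves (the install prefix is an assumption, and available options vary by FFmpeg version, so check ./configure --help for your tree), an armhf cross-compile configuration looks something like this, run from the FFmpeg source directory with the arm-linux-gnueabihf- toolchain installed:

```shell
# Hypothetical FFmpeg cross-compile configuration for armhf:
./configure \
    --enable-cross-compile \
    --cross-prefix=arm-linux-gnueabihf- \
    --arch=arm \
    --target-os=linux \
    --prefix=/usr/arm-linux-gnueabihf \
    --enable-shared
make -j4
make install
```

Every other dependency follows the same pattern: point its build system at the cross toolchain and install into the sysroot your cross-linker searches.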
OK, I decided to develop locally on my Tegra board. I would like to install Nsight on my ARM system and develop locally. Where can I download Nsight Eclipse Edition for ARM local development?
I want to revive this topic: what is the final solution?
Suppose my host Ubuntu doesn’t have OpenCV installed; how do I set up Nsight and remote-build on the target system? From the answers above, do I either copy all the library dependencies from the target system to the host and cross-compile, or use synchronized projects?
It looks like on your board, pkg-config returns flags and libs from the OpenCV 3.2 installed in /usr, while on your cross-compile host it returns the OpenCV4Tegra 2.4 flags and libs installed in /usr.
Check which version is installed in each location (libopencv_gpu is typical of OpenCV 2, while the libopencv_cuda* libraries came with OpenCV 3). If each location does have OpenCV 3.2, it may be a pkg-config issue; in that case, also look at /usr/lib/pkgconfig/opencv.pc and see what it says.
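To compare the two machines, you could run something like the following on both the board and the host ("opencv" is the package name the 2.4 and 3.x .pc files typically register; the .pc path is an assumption and may differ on your setup):

```shell
# What does pkg-config think OpenCV is on this machine?
pkg-config --modversion opencv 2>/dev/null || echo "no opencv.pc found"
pkg-config --cflags --libs opencv 2>/dev/null || true
# Inspect the .pc file pkg-config is actually reading, if present:
cat /usr/lib/pkgconfig/opencv.pc 2>/dev/null || true
```

If the two machines print different versions or different -I/-L paths, that mismatch is the discrepancy described above.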
You may also consider keeping your Jetson rootfs rsync’d with your host’s L4T rootfs to avoid such discrepancies.
Do you know which compiler produced your OpenCV library? If it is OpenCV4Tegra, it may have been built with GCC 4, so I would advise cross-compiling with GCC 4 for armhf. If you built your own OpenCV library, you should use the same compiler version for armhf.