Cross compiling my ROS packages for the NVIDIA Jetson TX2

I have some ROS packages developed for a robot that I want to deploy on a Jetson TX2 running ROS Melodic (the robot uses a TX2 as its computer). I want to compile the packages on a PC (presumably running Ubuntu 18.04) and deploy them to the TX2 to run there.

  1. Basically, I want the output of catkin_make install for the TX2 on my PC, so that I can copy the install/ directory from the build machine onto the robot
  2. I only need to cross compile my own packages; ROS and the other dependencies will be pre-installed on the TX2

I am an absolute beginner at cross compilation. The most straightforward guide I have found so far is this article for the Raspberry Pi. This other article cross compiles ROS for the PX2, but it is not very clear to me. Many other resources are outdated.

Can someone please show me the path I should follow? Where do I begin? Which toolchains do I use? What options do I set (catkin, CMake…)? Any insight is much appreciated.


What I have found so far (are these what I need? I have no idea):

  1. GCC toolchain for 64bit BSP
  2. L4T Sources for GCC Tool Chain for 64 bit BSP

I won’t be able to answer anything except a small part of your question, and I’ve not used ROS. This might get you started though.

If you use the cross compiler recommended by NVIDIA for kernel builds, then you are probably doing the right thing, though I don’t know whether any of the other software has a release version dependency.

When you cross compile a kernel, the kernel is “bare metal”: there are no libraries to link against, and there is no special support for the environment that a regular executable program might expect. Once you build a user space application, you must also consider having all of the libraries in place. The libraries themselves will not do anything for you unless you have a linker, and this must be a cross linker, since it runs on the desktop PC architecture but links for a foreign (arm64/aarch64) architecture.

The tools used for linking are considered the “runtime” package when installed to the host PC. The supporting environment of libraries is called the “sysroot”. These are the additions above and beyond the cross compiler itself. I don’t know which release you will be using, but most of this comes indirectly from linaro.org, under one of these releases (stick to the “apt” install mechanism if you can; I am just illustrating):
https://releases.linaro.org/components/toolchain/binaries/

If for example you were to look at release 7.3, running on Linux, with 64-bit ARM architecture (“aarch64”), then you would end up here:
https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/
…notice that “aarch64” in the name is the architecture, “linux” is the supporting environment, and “gnu” is the surrounding library environment.

Within this, the “runtime” is basically the cross linker. The “sysroot” is basically the libraries the linker would use. The “gcc” part is the set of cross tools, e.g., the cross compiler and assembler. A kernel only needs the “gcc” part; you will need all three for user space.
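To make that concrete, a first cross compile of a trivial program would look roughly like this (a sketch only; the paths and tarball names are illustrative and should be checked against the release page):

    # Unpack the Linaro gcc and sysroot tarballs side by side, then cross
    # compile a test program against that sysroot.
    TC=$HOME/l4t-toolchain
    $TC/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-gcc \
        --sysroot=$TC/sysroot-glibc-linaro-2.25-2018.05-aarch64-linux-gnu \
        -o hello hello.c
    file hello    # should report: ELF 64-bit LSB executable, ARM aarch64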

I recommend against using a third-party “sysroot”. There is nothing wrong with that approach, but traditionally such a sysroot is a very minimal set of libraries; every time you needed another library, you would end up building it yourself from scratch. If you have a running Jetson, then you already have the most complete sysroot possible.

A clone of the Jetson (loopback mounted), or a recursive copy of various directories from the Jetson, guarantees not only having that content, but also that you are linking against that specific release. The trick is that you would first want to install the devel package content on the Jetson itself (you are interested in the header files used for compiling, and not just the actual libraries…these come from the devel packages).
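For example, a recursive copy over ssh might look like this (a sketch; the user and hostname are placeholders):

    # Pull the Jetson's own headers and libraries to the host for use as a sysroot.
    mkdir -p ~/jetson-sysroot/usr
    rsync -a nvidia@jetson-tx2:/usr/include ~/jetson-sysroot/usr/
    rsync -a nvidia@jetson-tx2:/usr/lib ~/jetson-sysroot/usr/
    rsync -a nvidia@jetson-tx2:/lib ~/jetson-sysroot/
    # The cross compiler can then be pointed at this with --sysroot=$HOME/jetson-sysroot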

If you look at any Linux PC (or Jetson) and examine the directories “/lib” and “/usr/lib”, you’ll find subdirectories named after an architecture. Content without an architecture listed is likely all “native architecture” content. If cross-architecture content is installed, then you will also find subdirectories named after that non-native architecture, e.g., “/usr/lib/aarch64-linux-gnu” (notice how this matches the cross tool naming from linaro.org?) or “/lib/aarch64-linux-gnu”. This architecture-specific content is what the sysroot would provide, except here it is complete and already an exact match for the target system. A recursive copy, or symbolic links to a loopback mounted clone, is guaranteed to be a perfect match.

FYI, if you were to copy files into your host at “/lib/aarch64-linux-gnu/”, then you’d be stuck with that as your dev version. If these were instead symbolic links pointing at a loopback mounted clone, then simply remounting a different clone would instantly switch your development environment to that release. “Clones are your friend”; a clone can also be used to restore your Jetson if you lose it.
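A sketch of that workflow (image names and the mount point are illustrative):

    # Loopback mount a clone read-only, then symlink the dev environment into it
    # (assuming /usr/lib/aarch64-linux-gnu does not already exist on the host).
    sudo mount -o loop,ro clone-r32.4.img.raw /mnt/clone
    sudo ln -s /mnt/clone/usr/lib/aarch64-linux-gnu /usr/lib/aarch64-linux-gnu
    # Switching releases is then just a remount:
    sudo umount /mnt/clone
    sudo mount -o loop,ro clone-r32.5.img.raw /mnt/clone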

You can also update your development environment to use paths such as “/usr/local/lib/aarch64-linux-gnu”, and thus not mess with the system path at “/usr/lib”. The “/usr/local” content is intended for things not generally managed via the standard package manager, but would more or less mirror (and supplement) what you see in “/usr”.

You have yet another option someone else may be able to help with: QEMU, to emulate the full environment. You’d still use something like a clone, but you’d be using not only the clone’s libraries and headers, you’d also be using its binary executables. For example, the linker would be native, not cross/foreign; QEMU would pretend to be an aarch64 operating system based on your clone. Sorry, I can’t help with the details of that, nor with ROS or its requirements, but this should get you started, and someone else can fill in the details for ROS and/or QEMU.
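The commonly used user-mode recipe looks roughly like this (a sketch of the general technique only, not something I have validated on a Jetson clone):

    # binfmt_misc plus the static aarch64 QEMU lets the host transparently run
    # the clone's native binaries, including its own gcc and ld.
    sudo apt install qemu-user-static binfmt-support
    sudo cp /usr/bin/qemu-aarch64-static /mnt/clone/usr/bin/
    sudo chroot /mnt/clone /bin/bash    # inside: a "native" aarch64 shell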


Thank you very much @linuxdev for the very detailed explanation. I think this description will be a very good starting point for anyone who wants to cross compile, including me.

It seems QEMU does not have a TX2 system model yet.

Someone else will need to comment on how QEMU can be used, but technically you may not need full TX2 emulation if the libraries and linkers are compatible with the versions used on the TX2. I just don’t have enough experience with it to say.


Found an article on using QEMU

Another question: how do I use these? Is the toolchain enough, or should I use the L4T sources as well? And how do I do that?


EDIT: Figured it out. You build the L4T sources to get the GCC toolchain, so I only need the toolchain. Then specify the compiler toolchain in CMake as

SET(COMPILER_ROOT ${MY_BUILD_PATH}/<toolchain directory>/install/)

With the help of @linuxdev and this article, I managed to get catkin build to work in my workspace. Here’s what I have done (using a Jetson Nano for testing right now):

  1. Copied the ROS installation (/opt/ros/melodic/) from the Nano to my build directory (I will use a clone, as mentioned by @linuxdev, after figuring everything out)

  2. Copied the libpthread and librt library files from the Nano to <build dir>/usr/lib/aarch64-linux-gnu/ (EDIT: /usr/lib/aarch64-linux-gnu is now mounted from my Nano instead; the build directory structure is shown further down)

  3. Replaced absolute paths in the ROS files with my PC paths:

    find opt/ -type f -exec sed -i 's|/opt/ros/melodic|${CMAKE_CROSS_COMPILE_PREFIX}/opt/ros/melodic|g' {} \;
    find opt/ -type f -exec sed -i 's|/usr/lib/aarch64-linux-gnu|${CMAKE_CROSS_COMPILE_PREFIX}/usr/lib/aarch64-linux-gnu|g' {} \;
    
  4. Replaced pthread references as well (EDIT: this no longer seems to be needed):
    find opt/ -type f -exec sed -i 's|;pthread;|;${CMAKE_CROSS_COMPILE_PREFIX}/usr/lib/aarch64-linux-gnu/libpthread.so;|g' {} \;

  5. Extracted the toolchain gcc-4.8.5-aarch64.tgz into the build directory

  6. Created toolchain.cmake:

    SET(CMAKE_SYSTEM_NAME Linux)
    SET(CMAKE_SYSTEM_PROCESSOR aarch64)

    # NANO_ROOT_PATH must be set before it is used below
    SET(NANO_ROOT_PATH ${CMAKE_CURRENT_LIST_DIR})
    SET(NANO_MELODIC_PATH ${NANO_ROOT_PATH}/opt/ros/melodic)
    SET(COMPILER_ROOT ${NANO_ROOT_PATH}/gcc-4.8.5-aarch64/install)

    SET(CMAKE_C_COMPILER ${COMPILER_ROOT}/bin/aarch64-unknown-linux-gnu-gcc)
    SET(CMAKE_CXX_COMPILER ${COMPILER_ROOT}/bin/aarch64-unknown-linux-gnu-g++)

    # Necessary to avoid the non-RT problem
    SET(CMAKE_LIBRARY_ARCHITECTURE aarch64-linux-gnu)

    SET(CMAKE_FIND_ROOT_PATH ${NANO_ROOT_PATH} ${CATKIN_DEVEL_PREFIX})

    # This set of variables controls whether CMAKE_FIND_ROOT_PATH and
    # CMAKE_SYSROOT are used for find_xxx() operations. PROGRAM has to be
    # BOTH to allow CMake to find rospack on the host.
    SET(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM BOTH)
    SET(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
    SET(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
    SET(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)

    SET(CMAKE_PREFIX_PATH ${NANO_MELODIC_PATH} ${NANO_ROOT_PATH}/usr)

    SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} --sysroot=${NANO_ROOT_PATH}" CACHE INTERNAL "" FORCE)
    SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} --sysroot=${NANO_ROOT_PATH}" CACHE INTERNAL "" FORCE)
    SET(CMAKE_C_LINK_FLAGS "${CMAKE_C_LINK_FLAGS} --sysroot=${NANO_ROOT_PATH}" CACHE INTERNAL "" FORCE)
    SET(CMAKE_CXX_LINK_FLAGS "${CMAKE_CXX_LINK_FLAGS} --sysroot=${NANO_ROOT_PATH}" CACHE INTERNAL "" FORCE)

    SET(LD_LIBRARY_PATH ${NANO_MELODIC_PATH}/lib)

    # Skip CMake trying to build a test program with the cross compiler first
    SET(CMAKE_C_COMPILER_WORKS 1)
    SET(CMAKE_CXX_COMPILER_WORKS 1)
    
  7. Created my build script build.sh:

    #!/bin/bash
    PWD=$(pwd)
    export LANG=C
    source /opt/ros/melodic/setup.bash
    
    catkin config --extend ${PWD}/opt/ros/melodic/
    
    catkin build -j8 \
    --cmake-args \
    -DCMAKE_TOOLCHAIN_FILE=${PWD}/toolchain.cmake \
    -DCMAKE_CROSS_COMPILE_PREFIX=${PWD} \
    -DRT_LIBRARY=${PWD}/usr/lib/aarch64-linux-gnu/ 
    

Now I can run ./build.sh, and with no ROS packages in the workspace (nothing in src/) it works fine, so I guess everything is set up properly now.

Now when I introduce a simple ROS package into the workspace (src/test_ros_pkg/), I get a “stdlib.h not found” error:

In file included from <build dir path>/opt/ros/melodic/include/ros/time.h:53:0,
from <build dir path>/opt/ros/melodic/include/ros/ros.h:38,
from <build dir path>/src/test_cpp/src/listener.cpp:1:
<build dir path>/opt/ros/melodic/include/ros/platform.h:37:41: fatal error: stdlib.h: No such file or directory
 #include <stdlib.h> // getenv, _dupenv_s
                                         ^
compilation terminated.

But find -name stdlib.h -type f gives me

./gcc-4.8.5-aarch64/install/aarch64-unknown-linux-gnu/sysroot/usr/include/bits/stdlib.h
./gcc-4.8.5-aarch64/install/aarch64-unknown-linux-gnu/sysroot/usr/include/stdlib.h
./gcc-4.8.5-aarch64/install/aarch64-unknown-linux-gnu/include/c++/4.8.5/tr1/stdlib.h

My question

Why is the compiler unable to find stdlib.h? How do I point it at the right place?

Are you compiling with C or C++? I see “.cpp” files, so I assume C++. FYI, the standard search paths in C++ differ from C. There are a number of C headers (of which stdlib.h is one) which, if used directly in C++, need to be wrapped in extern "C". This is done automatically if you use the C++ version of the header, and in some cases the C header itself has C++ guards, but using the correct header will simplify your life.

C++ adds a number of generic C headers using a modified name. “stdlib.h” becomes “cstdlib”, “errno.h” becomes “cerrno”, “stddef.h” becomes “cstddef”, and so on.

If you cd in your development environment to “/usr/include/c++”, you will find the C++ versions of most of these files under some version-number subdirectory. Right now I am looking at “/usr/include/c++/7” on a Jetson NX, and here are some of the files (which are conveniently on the standard header search path):

  • cassert
  • ccomplex
  • cctype
  • cerrno
  • cfenv
  • cfloat
  • cinttypes
  • ciso646
  • climits
  • clocale
  • cmath
  • csetjmp
  • csignal
  • cstdalign
  • cstdarg
  • cstdbool
  • cstddef
  • cstdint
  • cstdio
  • cstdlib
  • cstring
  • ctgmath
  • ctime
  • cuchar
  • cwchar
  • cwctype
  • cxxabi.h

All of those files are basically C++ wrappers to the plain C header version. Note that “stdlib.h” is not there, but “cstdlib” is.

Now if you are mixing C code with C++ code, you may need to modify some source files to use the “.h” version from C and the “c…” version from C++, or to correctly wrap the C version in extern "C" { } when in C++.
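A quick illustration (the compiler path here is from your earlier post; adjust the --sysroot to your layout):

    # A tiny C++ file using the wrapper header; compiles cleanly with the cross g++.
    cat > /tmp/cstdlib_demo.cpp <<'EOF'
    // Preferred in C++: <cstdlib> rather than <stdlib.h>.
    // (A plain C header lacking C++ guards would instead need to be wrapped
    //  in extern "C" { ... } before use from C++.)
    #include <cstdlib>
    int main() { return std::getenv("PATH") ? EXIT_SUCCESS : EXIT_FAILURE; }
    EOF
    ./gcc-4.8.5-aarch64/install/bin/aarch64-unknown-linux-gnu-g++ \
        --sysroot=. -o /tmp/cstdlib_demo /tmp/cstdlib_demo.cpp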

Older systems and compilers did not make this distinction so clear, and the older 4.x compiler may not like the “/usr/include/c++/7” or newer headers. The 4.x compiler is used with some bootloader and kernel code, but recent releases build kernels with the 7.x compiler without issue. User space applications should also work with the 7.x compiler, so the 4.x compiler may be problematic. You could have both issues at the same time: a compiler release version issue, plus default C/C++ content being mixed without the correct search path.

I would actually recommend keeping your 4.x around in case of need, but see whether a 7.x compiler is available for user space code, and try that. The correct header files will probably not be in the compiler tool subdirectory itself, but within the other standard search locations. If you have installed development packages on your TX2 and can see headers in “/usr/include/c++/...”, then those are probably what you want (or at least the copy of that subdirectory in your host PC build environment).

Thank you for the reply. Your explanations are of great help. The problem here was that I didn’t have the right directory on the include path for stdlib.h. Adding it to my toolchain.cmake solved the “stdlib.h not found” error:

set(COMPILER_SYSROOT ${COMPILER_ROOT}/aarch64-unknown-linux-gnu/sysroot)
include_directories(BEFORE SYSTEM ${COMPILER_SYSROOT}/usr/include/)
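For anyone else debugging this, the compiler’s actual header search list can be printed like this (standard gcc behavior; run from my build directory root):

    # Show the effective C++ include search path of the cross compiler.
    echo | ./gcc-4.8.5-aarch64/install/bin/aarch64-unknown-linux-gnu-g++ \
        -E -v -x c++ - 2>&1 \
        | sed -n '/#include <...> search starts here:/,/End of search list./p'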

Now I have a different problem: the compiler can’t find Boost.

In file included from /home/teshan/xcompile/src/test_cpp/src/talker.cpp:1:
/home/teshan/xcompile/opt/ros/melodic/include/ros/time.h:58:50: fatal error: boost/math/special_functions/round.hpp: No such file or directory
 #include <boost/math/special_functions/round.hpp>
                                                  ^
compilation terminated.

I have mounted /usr/include/boost and /usr/lib/aarch64-linux-gnu from my Jetson Nano (I’m using a Nano for testing now). This is my folder structure now:

.
├── gcc-4.8.5-aarch64   #<cross compiler>
│   └── install
│       ├── aarch64-unknown-linux-gnu
│        ...
├── opt                 #<ROS source files from nano>
│   └── ros
│       └── melodic
├── src                 #<test c++ code in here>
│   └── test_cpp
│       └── src
└── usr
    ├── include
    │   └── boost       #<nano's /usr/include/boost mounted in here>
    └── lib
        └── aarch64-linux-gnu  #<nano's /usr/lib/aarch64-linux-gnu mounted in here>

I have asked about this on Stack Overflow as well, but haven’t gotten a response.

Well, the above got solved by mounting the whole of /usr/include instead of just /usr/include/boost:

.
├── gcc-4.8.5-aarch64   #<cross compiler>
│   └── install
│       ├── aarch64-unknown-linux-gnu
│        ...
├── opt                 #<ROS source files from nano>
│   └── ros
│       └── melodic
├── src                 #<test c++ code in here>
│   └── test_cpp
│       └── src
└── usr
    ├── include         #<nano's /usr/include mounted in here>
    └── lib
        └── aarch64-linux-gnu  #<nano's /usr/lib/aarch64-linux-gnu mounted in here>
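For reference, “mounted” here just means the Nano’s directories are exposed inside the build tree; sshfs is one way to do it (shown purely as an illustration; a bind mount of a loopback-mounted clone would work the same way):

    # Illustrative only: mount the Nano's directories into the build tree over ssh.
    sshfs nvidia@jetson-nano:/usr/include usr/include
    sshfs nvidia@jetson-nano:/usr/lib/aarch64-linux-gnu usr/lib/aarch64-linux-gnu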

Now it cannot find crt1.o, crti.o, and libpthread.so.0:

/home/teshan/xcompile/gcc-4.8.5-aarch64/install/bin/../lib/gcc/aarch64-unknown-linux-gnu/4.8.5/../../../../aarch64-unknown-linux-gnu/bin/ld: cannot find crt1.o: No such file or directory
/home/teshan/xcompile/gcc-4.8.5-aarch64/install/bin/../lib/gcc/aarch64-unknown-linux-gnu/4.8.5/../../../../aarch64-unknown-linux-gnu/bin/ld: cannot find crti.o: No such file or directory
/home/teshan/xcompile/gcc-4.8.5-aarch64/install/bin/../lib/gcc/aarch64-unknown-linux-gnu/4.8.5/../../../../aarch64-unknown-linux-gnu/bin/ld: cannot find /lib/aarch64-linux-gnu/libpthread.so.0 inside /home/teshan/xcompile

I have crt1.o and crti.o in the mounted directory usr/lib/aarch64-linux-gnu/, so why can’t the compiler find them?

There is no libpthread.so.0 on my Nano, but there is a libpthread.so in usr/lib/aarch64-linux-gnu/. What can I do about this?

pthread is just a package which you can install. If it is already installed, then the library would likely be found at:
/lib/aarch64-linux-gnu/
…for example, see “ls /lib/aarch64-linux-gnu/libpthread.so*” or “ldconfig -p | grep pthread”. “ls” simply looks at files; “ldconfig -p” prints the libraries which that linker’s default search path sees. Header files imply needing the “devel” package for pthreads. If you got to the linking stage, then you probably already have the header files; if the failure was earlier, then you probably need the devel versions for the headers.

See also:

    apt search pthread

…or, with a pager, since there is so much to see:

    apt search pthread | less -i

…then “/pthread” to search, “n” for the next match, and “shift-n” for the previous match.

The crt*.o files are somewhat interesting: they are the basis of the function “main()”, with the specific one depending on whether the language is C or C++. Sometimes in cross compiling, people talk about “bare metal” code versus “user space”. Bare metal has no support for many things a C/C++ programmer would otherwise take for granted, e.g., the automatic allocation of local variables has to be arranged manually in bare metal, and it is user space which makes these “standard” features available. The “crt*.o” files are basically the first step in providing those pleasant automated features of user space (versus bare metal), and they make the nice behaviors of “main()” possible.

You will normally find the “crt*.o” files in “/usr/lib/aarch64-linux-gnu”. Remember earlier how I mentioned in cross compile tools that “aarch64” was architecture, that “linux” is an indicator of “user space” in Linux (not bare metal), and that “gnu” is who provides the library environment? That combination is why this is under an “aarch64-linux-gnu” location. These should be present on your clone image or whatever you’ve copied from the Jetson for cross compile environment.
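Two standard gcc queries are useful for debugging exactly this (substitute the full path of your cross gcc for $CROSS_GCC; the variable name is just a placeholder):

    # Where does the toolchain think crt1.o lives? Prints a full path if found,
    # or just echoes the bare name back if it is not on the search path.
    $CROSS_GCC -print-file-name=crt1.o
    # The complete startfile/library search path list:
    $CROSS_GCC -print-search-dirs
    # If the files exist only under your mounted sysroot, adding that directory
    # with -B<dir> (or fixing --sysroot) is the usual remedy.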


Thank you for the explanations as always; they are a great help. I have the crt* files and the libpthread files; the problem was the compiler not finding them. I think the error came at the linking stage.

However, I gave up on that and started cross compiling ROS fully, and ended up creating this fork of an old repository that was for cross compiling an older ROS version.

Later I found out that there already is a repo which does what I want (I don’t know how I missed it). It cross compiles your ROS packages with no hassle:

https://github.com/ros-tooling/cross_compile

If no other problems arise from my dependencies, I’ll be using this from here onward.