Toolchain (cross-compiler) of Jetson TX2

Hi, I need to attach an external infrared camera to the TX2, and the seller has asked me to provide them a toolchain (cross-compiler). This is what they require:

  • If the board supports LIBUSB, please provide us the Toolchain (cross-compiler) of that board.
  • If the board does not support LIBUSB, we need both the toolchain and the kernel source. The kernel source should be compiled in advance. You can contact the board manufacturer to obtain the toolchain and kernel source.

I have searched some materials, but I’m still puzzled about what a toolchain is. So what are the toolchain and kernel source for the Jetson TX2?
Thanks for any advice.

Hi tt45,

You can find the sources, or a pre-built cross-compiling toolchain that runs on an x86_64 Linux host PC, in the Jetson Download Center:
https://developer.nvidia.com/embedded/downloads

Thanks

It’s extremely large, but if you really want to, you can offer them a clone image of the entire root partition, which can be loopback mounted. This guarantees compatibility with every component (the toolchain is separate, but you could, for example, install the libusb dev files before cloning and they’d know everything is there).
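For reference, using such a clone on the host PC looks roughly like the following sketch; the image name and mount point are placeholders, not names from this thread:

```shell
# Sketch: loopback-mount a raw clone of the Jetson's root partition on the
# host PC. "tx2-rootfs.img" and "/mnt/tx2" are placeholder names.
sudo mkdir -p /mnt/tx2
sudo mount -o loop tx2-rootfs.img /mnt/tx2

# The seller could then point their cross-compiler at the mount as a
# sysroot (e.g. --sysroot=/mnt/tx2), linking against the exact libraries
# (including libusb) installed on this particular Jetson.
```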

Hi kayccc,

Thanks for your help.

I provided the camera seller with the link https://developer.nvidia.com/embedded/downloads. However, they were confused about which version I am using. The following is the reply I received:

“The provided link contains several download links, so could you please provide the exact download link of the toolchain for your current device.

You can ask the manufacturer about the LIBUSB support issue; only the manufacturer can know it well. If the device supports LIBUSB, no driver is required to connect with the IriShield.
We need this information to choose the correct SDK source code for compiling. If the device does not support LIBUSB, please also provide us the link to download the kernel source
(the provided link contains several download links for the kernel source as well, so please point out the correct version you are using).”

I am using a Jetson TX2 with Ubuntu 16.04 and JetPack 3.1. I am wondering whether the Jetson TX2 supports LIBUSB or not, and which toolchain version is correct: the 28.2DP versions of “GCC Tool Chain Sources for 64-bit BSP” and “GCC Tool Chain for 64-bit BSP”, or something else?

Thanks.

They will need to log in first (which means they have to register…but it doesn’t cost anything and won’t result in spam).

Have them put “tool” in the search box, “L4T-TX2” under software, and “build” under tools, then pick from “version”…most likely you want the R28.1 version (this implies the Jetson you build for was flashed with R28.1). They can use “sources” if they want to compile their own toolchain…it’s very unlikely they want to do that.

Note that a toolchain is only the tools for compiling…it doesn’t contain a sysroot…basically the headers and libraries the program would link against in user space once copied over to the Jetson.

Libusb is a user-space library…it isn’t the Jetson itself that does or doesn’t support it. If it is installed and in the linker path, then you can say it is available. Running “ldconfig -p” prints everything the linker sees; “ldconfig -p | grep libusb” narrows that to libusb. It is there by default in the sample rootfs. You might ask whether they could simply unpack the sample rootfs and use that as the sysroot…it is complete other than some boot files, and it has both libusb-1.0 and libusb-0.1.
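The check just described, as a copy-pasteable snippet (run on the Jetson itself):

```shell
# List what the dynamic linker knows about libusb; on the stock L4T sample
# rootfs this shows both libusb-1.0 and libusb-0.1. The fallback message is
# printed only when nothing matches, so the snippet always exits cleanly.
ldconfig -p | grep libusb || echo "libusb not in the linker cache"
```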

The advantage of giving them a loopback-mountable clone is that it includes all package additions and package updates from your exact environment.

Hi,

On this page, https://developer.nvidia.com/embedded/linux-tegra:

  • The link titled “GCC 6.4.1 Tool Chain for 64-bit Kernel” actually points to a source tar, and …
  • The link titled “Sources for the GCC 6.4.1 Tool Chain for 64-bit Kernel” actually points to the prebuilt toolchain

Q: does it support cross-compiling CUDA programs, e.g. on an x86_64 machine with the CUDA toolkit installed?

Hi,

A CUDA program must be compiled with nvcc using the SM configuration that matches the target GPU.

Example for the TX2 (sm_62):

nvcc -gencode arch=compute_62,code=sm_62 -gencode arch=compute_62,code=compute_62 -o output.o -c source.cu
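On an x86_64 host, the same invocation can cross-compile for the TX2 by pointing nvcc at an aarch64 cross g++ via -ccbin. This is a sketch, assuming the CUDA cross-development package for aarch64 is installed and that the cross compiler is named aarch64-linux-gnu-g++ (both are assumptions, not details from this thread):

```shell
# Sketch: cross-compile a CUDA source on an x86_64 host for the TX2 (sm_62).
# -ccbin selects the host-side C++ compiler that nvcc wraps; with an aarch64
# cross g++ here, the object file is produced for the Jetson, not the host.
nvcc -ccbin aarch64-linux-gnu-g++ \
     -gencode arch=compute_62,code=sm_62 \
     -gencode arch=compute_62,code=compute_62 \
     -o output.o -c source.cu
```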

Thanks.

Thanks for the reply.

I’ve managed to cross-compile tensorflow 1.11 (from the tf branch r1.11), which brings quite a performance boost with “contrib.tensorrt”
(it is also much more verbose during conversion, which is a big help in figuring out which nodes need to be modified to be “tensorRT friendly”).

Some feedback:

  1. https://github.com/NVIDIA-Jetson/tf_trt_models/blob/master/tf_trt_models/detection.py#L163

    for node in frozen_graph.node:
        if 'NonMaxSuppression' in node.name:
            node.device = '/device:CPU:0'

    In tf 1.11, this causes a SEGV during conversion, since the CPU device doesn’t have “gpu_device_info”

  2. It would be better if the cross-compile toolchain contained a gold linker
    – with the ‘-g’ option enabled, the default bfd linker is unable to finish linking tensorflow (it fails with a “memory exhausted” error)

‘-g’ is helpful for debugging

Q: I encountered a SEGV in tf’s command-line binaries (e.g. benchmark_model)
a) it happens during _dl_init()

b) x/10i $pc

=> 0xe7add8 <_ZSt9call_onceIRFvvEJEEvRSt9once_flagOT_DpOT0_+32>: str x2, [x1,x3]
   0xe7addc <_ZSt9call_onceIRFvvEJEEvRSt9once_flagOT_DpOT0_+36>: ldr x3, 0xe7ae18
   0xe7ade0 <_ZSt9call_onceIRFvvEJEEvRSt9once_flagOT_DpOT0_+40>: adrp x2, 0x99ee000 <_ZTVN10tensorflow12SoftmaxOpGPUIdEE+56>

c) the above instructions belong to a source file with the extension “.cu.cc” (a mixed C++/CUDA source file)

It looks like it was generated by nvcc; any idea about this?

UPDATE: this can be fixed by linking the binary “benchmark_model” with the gold linker (which comes from gcc-linaro-6.4.1-2017.08),

i.e. by combining the toolchains “GCC 4.8.5 Tool Chain for 64-bit BSP” and “GCC 6.4.1 Tool Chain for 64-bit Kernel”.
From the latter, only the gold linker is used.
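One way to realize that combination is to leave the 4.8.5 BSP toolchain doing the compiling but make its driver find the 6.4.1 toolchain’s gold as “ld”. This is only a sketch: every path below, including the triplet directory names, is an assumption about where the two toolchains were unpacked, not something stated in this thread.

```shell
# Sketch: make the GCC 4.8.5 BSP toolchain link with the gold linker taken
# from gcc-linaro-6.4.1-2017.08. All paths are placeholders; adjust them.
BSP=~/toolchains/gcc-4.8.5-bsp
LINARO=~/toolchains/gcc-linaro-6.4.1-2017.08

cd "$BSP/aarch64-unknown-linux-gnu/bin"
mv ld ld.bfd.orig                                        # keep the bfd linker
ln -s "$LINARO/aarch64-linux-gnu/bin/ld.gold" ld         # driver now uses gold
```

An alternative that avoids touching the toolchain is passing -fuse-ld=gold with a -B path pointing at the directory containing ld.gold.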