Configuring the Jetson TX2 for a Custom Carrier Board

Hello, I wish to configure the Jetson TX2 so that it is compatible with my custom carrier board. I followed the reference design schematics so that I can use configuration #2, with lanes 1-4 as PCIe and lane 0 as USB-SS. When I install the Jetson module on my carrier board, USB-SS does not work. I managed to remote into the Jetson and determined that the PCIe controller is configured as 4x1, 1x1. I suspect that this has something to do with the Jetson not detecting the EEPROM and therefore loading a default configuration.

I am trying to navigate the Nvidia documentation and having some difficulty following the process. Steps I have taken:

  • Configured the pinmux Excel spreadsheet and produced the .cfg files from it.
  • Located the .cfg, .dts, and .dtsi files in the source tree and made my modifications.
  • Reflashed the device, but no changes took effect.

I am guessing that there is a step that I am missing in this process.
The latest adaptation guide jumps from configuring UPHY straight to flashing the device. Do I need to cross-compile the kernel for the changes in my config to take effect? Is it imperative that I give my board a unique name in the files to differentiate it from the known defaults?

Thanks.

I suspect that this has something to do with the Jetson not detecting the EE prom and therefore loads a default configuration.

Wayne: Yes, you are correct.

What you need to do is comment out the plugin-manager and change the device tree.
See the USB lane mapping section at https://elinux.org/Jetson/TX2_USB.

Hello, after reading many forum threads and hacking away in the terminal today, I have determined that the following five steps are necessary to change the device tree for a custom carrier board.

  1. Make your desired changes to the .dts, .dtsi, and .conf files
  2. Recompile the "Linux_for_Tegra/sources/kernel/kernel-4.9" directory using a Linaro toolchain (since it targets Arm processors); details on that can be found here: https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2Fkernel_custom.html.
  3. Run apply_binaries.sh within the Linux_for_Tegra folder to apply those cross-compiled changes to the root filesystem
  4. Run flash.sh within the Linux_for_Tegra folder to flash the Jetson TX2 with your custom changes
  5. Attempt to boot to confirm the desired changes took place.
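The steps above can be sketched as a single sequence (the paths, board name, and device node are assumptions for illustration; adjust to your own setup):

```shell
# Steps 1-2: build the kernel out of tree (after editing the .dts/.dtsi/.conf files)
cd Linux_for_Tegra/sources/kernel/kernel-4.9
make ARCH=arm64 O=$TEGRA_KERNEL_OUT CROSS_COMPILE=$CROSS_COMPILE tegra_defconfig
make ARCH=arm64 O=$TEGRA_KERNEL_OUT CROSS_COMPILE=$CROSS_COMPILE -j4

# Copy the freshly built Image where the flashing tools expect it
cp $TEGRA_KERNEL_OUT/arch/arm64/boot/Image ../../../kernel/Image

# Steps 3-4: populate the rootfs and flash (board must be in recovery mode)
cd ../../..
sudo ./apply_binaries.sh
sudo ./flash.sh jetson-tx2 mmcblk0p1
```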

I’m stuck on step 2.

I can run (where $ARCH=arm64 and $TEGRA_KERNEL_OUT=/whatever/output/directory)

sudo make ARCH=$ARCH O=$TEGRA_KERNEL_OUT tegra_defconfig

which (to the best of my knowledge) fills the .config file within the designated $TEGRA_KERNEL_OUT folder with default values. I can also run the following command with no issue:
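A quick sanity check that the defconfig step actually populated the output directory (a sketch; CONFIG_ARM64 is a standard arm64 kernel option):

```shell
# A populated .config should exist and select the arm64 architecture
grep '^CONFIG_ARM64=y' "$TEGRA_KERNEL_OUT/.config"
```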

sudo make ARCH=$ARCH O=$TEGRA_KERNEL_OUT menuconfig

However, when I attempt to run

sudo make ARCH=arm64 O=$TEGRA_KERNEL_OUT -j1

I receive the following error:

gcc: error: unrecognized command line option ‘-mlittle-endian’; did you mean ‘-fconvert=little-endian’?

Virtually every forum thread I have come across says that this occurs because an x86 compiler is being used instead of an Arm cross-compiler. But, as I stated before, I am using a Linaro toolchain - specifically the one found here: https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/ - and am still receiving the little-endian error.

How can I avoid this error so I can complete the cross-compilation, and move on to step #3?

Thanks,
Ben

Posting in here again just hoping someone can answer my above question.

Thanks,
Ben

We are checking it. Thanks for your patience.

Hi ben.bole,

Please make sure you are following the process below:

export TEGRA_KERNEL_OUT=<outdir>
export CROSS_COMPILE=<gccdir>/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-
export LOCALVERSION=-tegra
cd public_sources/kernel/kernel-4.9/
make ARCH=arm64 O=$TEGRA_KERNEL_OUT tegra_defconfig
make ARCH=arm64 O=$TEGRA_KERNEL_OUT -j4

I executed the commands above as instructed and encountered the following error:

gcc: error: unrecognized command line option ‘-mlittle-endian’; did you mean ‘-fconvert=little-endian’?

This happens directly after the cross-compiler encounters the empty.o object file. Upon reading empty.o’s human-readable counterpart, empty.c, I discovered that empty.o is used to determine the endianness of the host system. I am also closely monitoring the following thread, which describes an identical issue: https://devtalk.nvidia.com/default/topic/1061770/jetson-tx2/jetson-tx2-compiling-kernel/. I tried executing the commands which LinuxDev detailed (using both the cross-compiler described in the thread and the Linaro cross-compiler recommended by NVIDIA), and I still encounter the above error. This leads me to believe that it is some kind of problem with my host machine, which has the following specs:

PC: ASUS, x86_64 processor
OS: Ubuntu 18.04
JetPack: 4.2.1
L4T kernel: 4.9
Make: built for x86_64-pc-linux-gnu
Toolchain: Linaro 7.3.1 2018.05

Would the version of my Make software impact the cross-compilation of the kernel? Is this an issue with my host machine?

Thanks,
Ben

All,

I have discovered how to cross-compile the kernel on the above machine. The following commands should be executed, in order, within the Linux_for_Tegra/sources/kernel/kernel-4.9 directory, assuming that $TEGRA_KERNEL_OUT is your output directory, $LOCALVERSION is -tegra, and $CROSS_COMPILE is the Linaro cross-compiler location plus prefix (it should look something like /usr/gcc-linaro/bin/aarch64-linux-gnu-).

sudo make mrproper
sudo make ARCH=arm64 O=$TEGRA_KERNEL_OUT CROSS_COMPILE=$CROSS_COMPILE tegra_defconfig
sudo make ARCH=arm64 O=$TEGRA_KERNEL_OUT CROSS_COMPILE=$CROSS_COMPILE -j4

The top-level Makefile suggests that you can export CROSS_COMPILE in the environment and make will pick it up, but that did not work here - most likely because running make under sudo resets the environment by default. Setting CROSS_COMPILE explicitly on the make command line always works. You can also pass any number after -j, typically up to your number of cores.
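The environment-variable behaviour is easy to see directly: an exported variable reaches ordinary child processes, but a default sudo invocation scrubs it (a minimal demonstration, unrelated to any Jetson-specific tooling):

```shell
export CROSS_COMPILE=aarch64-linux-gnu-

# An ordinary child process sees the exported variable:
sh -c 'echo "child sees: $CROSS_COMPILE"'

# But `sudo make ...` runs with a reset environment by default, so make
# never sees CROSS_COMPILE; either use `sudo -E`, or (more reliably)
# pass it on the command line: sudo make CROSS_COMPILE=$CROSS_COMPILE ...
```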

As this process continues, I will continue to post here to attempt to fill in some of the gaps within the NVIDIA documentation.

Thanks,
Ben
