Recompile Jetson Nano Developer Kit Kernel

I need to recompile the Jetson Nano Developer Kit’s kernel to support RFCOMM TTY so that I can get it to communicate with an ESP32 over Bluetooth.

I followed this guide, which I had seen recommended in other posts: https://blog.hypriot.com/post/nvidia-jetson-nano-build-kernel-docker-optimized/

However, the Nano failed to boot with the recompiled image, and something must have been corrupted because it would not boot even after I replaced the image with the original.

I’ve reflashed the Nano. I wanted to follow official NVIDIA documentation this time, but the documentation I found does not cover the Nano, and methods that rely on SDK Manager do not work either: when I plug in the Nano, all I get are greyed-out install options:

┏ Install options ------------------------------ ┓

  • Select action: Install
  • Select product: Jetson
  • Select system configuration: Host Machine [Ubuntu 22.04 - x86_64], Target Hardware
  • Select target hardware: Jetson Nano modules [ Jetson Nano [developer kit version] Detected ]
  • Select target operating system: Linux
  • Select SDK version: (No available option, press to go back)
    ────────────────────────────────
    JetPack 6.0 DP [ Not available for Jetson Nano modules ]
    JetPack 5.1.2 [ Not available for Jetson Nano modules ]
    JetPack 5.1.1 (rev. 1) [ Not available for Jetson Nano modules ]
    JetPack 5.1 (rev. 1) [ Not available for Jetson Nano modules ]
    JetPack 5.0.2 (rev. 2) [ Not available for Jetson Nano modules ]
    JetPack 5.0.2 Runtime (rev. 2) [ Not available for Jetson Nano modules ]
    JetPack 4.6.4 [ Available on host OS: Ubuntu 16.04, Ubuntu 18.04 ]
    JetPack 4.6.3 [ Available on host OS: Ubuntu 16.04, Ubuntu 18.04 ]
    JetPack 4.6.2 [ Available on host OS: Ubuntu 16.04, Ubuntu 18.04 ]

What is a safe way to recompile the kernel?

For any official Jetson document, search for “L4T archive” instead of relying on other posts on the Internet.

Click the version you want to use and you will find the Developer Guide document inside.

Check the “Kernel customization” section and it will tell you how to build the kernel.

The official documentation is good at explaining how to build kernels. There are sometimes (often?) easier install methods than flashing. Most people, though, run into trouble with configuration. If you don’t understand configuration (which is actually somewhat simple, but not always obvious), then you will likely fail. You might find this useful:

Some more on kernels:
https://forums.developer.nvidia.com/t/topic/238718/25

Keep in mind that you should use the exact kernel source that matches the currently running system. This is part of what @WayneWWW is getting at by mentioning the Jetson archive. Each L4T release (which is just Ubuntu plus NVIDIA drivers; check “head -n 1 /etc/nv_tegra_release”) is tied to a specific kernel, and that kernel and its documentation are available for each release here:
https://developer.nvidia.com/linux-tegra
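
For example, a quick check on the running Nano (the revision shown below is only illustrative; yours will differ):

head -n 1 /etc/nv_tegra_release
# Example output: "# R32 (release), REVISION: 7.4, ..." which would correspond to L4T R32.7.4.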

Thank you. I was not aware the archive contained documentation for the respective Linux versions, though that is obvious in retrospect. The background information is also helpful.

The L4T version is 32.7.4, so I will try the method on this page https://docs.nvidia.com/jetson/archives/l4t-archived/l4t-3274/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/kernel_custom.html#wwpID0E0OE0HA and report the outcome.

I’ve followed this guide up to step 5 and successfully created the boot Image. However, I’m unsure what it’s referring to by <release_packagep>/Linux_for_Tegra/kernel/Image in the next steps.

I have the folder ~/Linux4Tegra from extracting the public_sources archive, but it does not contain a kernel/ folder, and I don’t know where <release_packagep> comes from.

The hypriot guide I linked earlier says to replace /boot/Image with the created Image. Is this somehow related?

To build the Jetson Linux Kernel

1. Set the shell variable with the command:

$ TEGRA_KERNEL_OUT=<outdir>

Where:

• <outdir> is the desired destination for the compiled kernel.

2. If cross-compiling on a non-Jetson system, export the following environment variables:

$ export CROSS_COMPILE=<cross_prefix>

$ export LOCALVERSION=-tegra

Where:

• <cross_prefix> is the absolute path of the ARM64 toolchain without the gcc suffix. For example, for the reference ARM64 toolchain, <cross_prefix> is:

<toolchain_install_path>/bin/aarch64-linux-gnu-

See The L4T Toolchain for information on how to download and build the reference toolchains.

Note: NVIDIA recommends using the Linaro 7.3.1 2018.05 toolchain.

3. Execute the following commands to create the .config file:

$ cd <kernel_source>

$ mkdir -p $TEGRA_KERNEL_OUT

$ make ARCH=arm64 O=$TEGRA_KERNEL_OUT tegra_defconfig

Where:

• <kernel_source> is the directory containing the kernel sources (e.g. kernel-4.9).

4. Execute the following commands to build the kernel, including all DTBs and modules:

$ make ARCH=arm64 O=$TEGRA_KERNEL_OUT -j<number_of_CPUs>

Where <number_of_CPUs> indicates the number of parallel processes to be used. A typical value is the number of CPUs in your system.

5. Replace <release_packagep>/Linux_for_Tegra/kernel/Image with a copy of:

$TEGRA_KERNEL_OUT/arch/arm64/boot/Image

6. Replace the contents of Linux_for_Tegra/kernel/dtb/ with the contents of:

$TEGRA_KERNEL_OUT/arch/arm64/boot/dts/

7. Execute the following commands to install the kernel modules:

$ sudo make ARCH=arm64 O=$TEGRA_KERNEL_OUT modules_install \

INSTALL_MOD_PATH=<release_packagep>/Linux_for_Tegra/rootfs/

8. Optionally, archive the installed kernel modules using the following command:

$ cd <modules_install_path>

$ tar --owner root --group root -cjf kernel_supplements.tbz2 \

lib/modules

The installed modules can be used to provide the contents of /lib/modules/<kernel_version> on the target system.

Use the archive to replace the one in the kernel directory of the extracted release package prior to running apply_binaries.sh:

Linux_for_Tegra/kernel/kernel_supplements.tbz2
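
For concreteness, here is a hedged sketch of how those placeholders might be filled in on a cross-compile host. Every path below is an assumption for illustration only; substitute your own toolchain and source locations:

# All paths are illustrative assumptions; adjust them to your own setup.
export TEGRA_KERNEL_OUT=$HOME/nano_kernel_out           # <outdir>
export CROSS_COMPILE=$HOME/l4t-toolchain/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-   # <cross_prefix>
export LOCALVERSION=-tegra

cd $HOME/kernel_src/kernel/kernel-4.9                   # <kernel_source>
mkdir -p $TEGRA_KERNEL_OUT
make ARCH=arm64 O=$TEGRA_KERNEL_OUT tegra_defconfig
make ARCH=arm64 O=$TEGRA_KERNEL_OUT -j4                 # -j<number_of_CPUs>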

If you don’t know what Linux_for_Tegra is and you are using a Jetson Nano, then I guess you didn’t flash your board using flash.sh or SDK Manager before, right?

That is correct. I flashed the SD card with the JetPack 4.6.1 image as per this guide https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit#write
and then upgraded to 4.6.4 through apt.

  1. The document we are talking about here actually describes cross-compiling from another x86 Ubuntu PC, not building on the Jetson.

  2. Linux_for_Tegra is on the x86 PC, not on the Jetson, and it is actually used by our official tools.

  3. If you don’t want to learn how to flash the board the official way, then you could just replace the files on the Jetson directly. For example, the kernel image on the Jetson is /boot/Image, and the kernel modules are under /lib/modules.

  4. I still suggest you learn the official way.

I see. The document has a step with the phrase “if you are cross-compiling, do X,” so I figured it worked for either method.

I do not have an x86 PC. Would replacing the Image and modules directly be safe, or would it require additional configuration to work (environment variables, etc.)? And is there a document detailing the non-cross-compiled method?

This is about the packaging and extracting of files in tar archives (the source as a whole is a tar archive, and this contains another tar archive which is the kernel itself).

Incidentally, whenever JetPack/SDK Manager is used on the host PC for flashing the Jetson, it creates this directory:
~/nvidia/nvidia_sdk/JetPack...version.../Linux_for_Tegra/

That “<release_package>” is named after the JetPack version, which is what my “JetPack...version...” notation refers to. This is where you find things labeled “Linux_for_Tegra/”. The “Linux_for_Tegra/” content, aside from what populates “Linux_for_Tegra/rootfs/”, is the “driver package” which performs the flash. The “Linux_for_Tegra/rootfs/” is a purely Ubuntu filesystem with NVIDIA drivers on top of it. Just “cd ~/nvidia/nvidia_sdk” and then run “ls” to see which L4T releases your host has been set up to flash.

I would be careful about installing the kernel with the copy to “/boot/Image”, but here is a short list of what is going on…

The Image is the uncompressed kernel. The kernel normally resides on the Jetson at “/boot/Image”, although flashing will also put the kernel in eMMC for an eMMC model (the “/boot/Image” takes priority over the partition version if security fuses are not burned). The command “uname -r” depends on the combination of the kernel version and the setting of CONFIG_LOCALVERSION during its build, and modules are searched for (by that specific kernel) at “/lib/modules/$(uname -r)/kernel”. Sometimes you can just build a module and copy that in. Once you copy a new Image in, things get more complicated (not terribly complicated, but you are now at risk of not being able to boot if it goes wrong). Flashing to add the kernel is typically only needed if your kernel is going into a partition.
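
As a quick illustration of how the kernel finds its modules (the version string below is only an example):

uname -r
# e.g. 4.9.140-tegra: the "4.9.140" comes from the kernel source version and
# the "-tegra" comes from CONFIG_LOCALVERSION at build time.
ls /lib/modules/$(uname -r)/kernel
# If an installed Image reports a "uname -r" with no matching directory here,
# its modules will not load.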

So far as getting the source goes, on the R32.7.4 page the Nano source is the “Driver Package (BSP) Sources”, which provides “public_sources.tbz2”, a file with a lot of other content; only one piece of its subcontent is the kernel source. It is a package within a package.

To list what is in the original public_sources.tbz2 you can do this:
tar --list -j -f ./public_sources.tbz2

The “--list” shows the content instead of extracting it. The “-j” says this is a bzip2-compressed file (.tbz2 is a contraction of .tar.bz2; they are the same). The “-f” names the file to operate on. Note that the file inside this archive which you are interested in, and which is the kernel source, is:
kernel_src.tbz2

A very important note is that kernel_src.tbz2 needs the full path within the archive named, so what you really extract is:
Linux_for_Tegra/source/public/kernel_src.tbz2
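
If you want to confirm that path before extracting, you can filter the listing shown earlier:

tar --list -j -f ./public_sources.tbz2 | grep 'kernel_src.tbz2'
# Expected to print: Linux_for_Tegra/source/public/kernel_src.tbz2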

You can extract that from the original, and then extract everything from kernel_src.tbz2 to get the actual source; quoting the name of that file with single quotes avoids the “/” characters occasionally causing issues:

# Subpackage:
tar -jvx -f public_sources.tbz2 'Linux_for_Tegra/source/public/kernel_src.tbz2'

# You could now move `kernel_src.tbz2` somewhere else, but if you were in
# "~/nvidia/nvidia_sdk/JetPack...version.../" when you extract, then this would
# automatically place the kernel source file at
# "./Linux_for_Tegra/source/public/kernel_src.tbz2"

# Regardless of where it is, the kernel source itself can be extracted:
cd Linux_for_Tegra/source/public
tar xjvf ./kernel_src.tbz2

# You could have used mv on kernel_src.tbz2 instead of cd to there.

(x is extract, v is verbose, j is again for bzip2; I’m naming the file to be extracted rather than extracting everything in the first command; second command just extracts all from kernel_src.tbz2)

Note that the kernel source itself does not require root ownership; it works perfectly well as a regular user. However, if the original source is owned by root, and all modifications and temporary build output go to a separate location, then you have a guarantee that your source is pristine at all times. The “--owner root --group root” isn’t really necessary, but when unpacking to “Linux_for_Tegra/rootfs/” you will find the destination requires root. You only need to unpack to that location if you are including it as part of a flash. When you are updating after a flash, manually adding or editing a kernel, you don’t need that extraction location; you could just use the commands I gave. You can prefix the unpacking of kernel_src.tbz2 with “sudo” if you want it to have root ownership.

Please note that if you’ve flashed once via JetPack, then apply_binaries.sh has already been run and it doesn’t need to be run again in most circumstances (this is what overlays the NVIDIA content onto a previously pure Ubuntu rootfs). In cases where the “Linux_for_Tegra/” has been updated with new alternate content which apply_binaries.sh would add, then it might be useful for adding your customization. For a running Jetson none of this is needed since you’re just copying a file to the Jetson. The case where this is most useful is if you want to flash a number of Jetsons with this modification, e.g., for manufacturing. For your case you can just unpack the kernel source as a whole in some custom location, build, and then make copies of content to the Jetson rather than using the special unpacking.

The part the documents are good at is the cross-compile itself. They tell you what to install on the host PC and the steps to take. For that you just unpack the kernel source to your favorite location without the special setup.

Cross-compiling is the same as a regular compile except for the environment settings and the cross tools. Some notes on the comparison (a short side-by-side sketch follows the list):

  • You set ARCH to arm64 for cross-compile, and do not set this during native compile.
  • You don’t set CROSS_COMPILE on native compile.
  • You probably do want to use an alternate temporary output location via “O=/some/where” in both cases.
  • You probably need to run “sudo apt-get install libncurses5-dev” in both cases (ncurses is needed for the menuconfig/nconfig interfaces).
  • You will want a TEGRA_MODULES_OUT location for some temporary module output in both cases. No difference.
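
A hedged side-by-side sketch of the difference (using <cross_prefix> as discussed in the documentation excerpt above):

# Cross-compile on an x86 host:
make ARCH=arm64 CROSS_COMPILE=<cross_prefix> O=$TEGRA_KERNEL_OUT tegra_defconfig

# Native compile on the Jetson itself (no ARCH, no CROSS_COMPILE):
make O=$TEGRA_KERNEL_OUT tegra_defconfig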

I’ll give an example to illustrate native compile. I will assume you have this temporary location, which is a bad idea if you do not have enough disk space on the Jetson, but which is perfect if you have the disk space (disk space is a big reason to cross-compile):

mkdir ~/temporary
mkdir ~/temporary/kernel
mkdir ~/temporary/modules

export TEGRA_KERNEL_OUT=~/temporary/kernel
export TEGRA_MODULES_OUT=~/temporary/modules

# Now just follow build steps. You could make sure the original source
# is pristine:
cd /where/ever/kernel/source/is

# `pwd` just echos where you are at the moment. TOP is the traditional
# name for the root of any source compile.
export TOP=`pwd`

# This makes that location pristine. You want it owned by root. You won't
# use sudo anymore after this.
sudo make mrproper

make O=$TEGRA_KERNEL_OUT tegra_defconfig

# Note that the "-j #" option says how many cores to use. If you have 6 cores,
# then probably you'd use "-j 6".

# This nconfig is just if you want adjustments over the default config. Use
# the "m" key for module format if it is available:
make O=$TEGRA_KERNEL_OUT nconfig

# Don't forget that in nconfig you can edit CONFIG_LOCALVERSION. nconfig
# has a symbol search function, and you could search for "localversion" (the
# search is case insensitive and knows about the "CONFIG_" prefix). You could set
# CONFIG_LOCALVERSION to "-tegra" if you are only adding modules. If modifying
# integrated features with "=y", then there is more to the story.

make O=$TEGRA_KERNEL_OUT -j 6 Image
make O=$TEGRA_KERNEL_OUT -j 6 modules
make O=$TEGRA_KERNEL_OUT modules_install INSTALL_MOD_PATH=$TEGRA_MODULES_OUT

# Now look for Image:
cd $TEGRA_KERNEL_OUT
find . -name Image

# Or just browse the module location; this includes the full path, which mirrors
# where anything would be copied to. You only need to copy one file if you
# just build a new module, but all modules might be listed. Ignore any that you
# did not add:
cd $TEGRA_MODULES_OUT
cd lib/modules
# There will be a subdirectory named after what this kernel Image expects.
# This is just an example, it might differ for you:
cd 4.9.140-tegra
cd kernel
find . -name '*.ko'

Just remember that the biggest issue on native compile is having enough disk space. You probably want at least 5 GB spare space, and that is after you’ve copied the source code in. You could copy the kernel_src.tbz2 in and unpack it, then check:
df -H -T -t ext4
(this lists ext4 partitions; RAM pseudo partitions don’t count; if you have only eMMC or only SD card, then this is the same: “df -H -T /”)

Also, to emphasize: you don’t need ARCH or CROSS_COMPILE when natively compiling. In fact, even if you set ARCH to the architecture of the native system, it is likely the build will treat arm64 as a foreign architecture. You don’t want that when working natively on arm64.

Thank you, that is very clear. One thing though:

“You could set CONFIG_LOCALVERSION to “-tegra” if you are only adding modules. If modifying integrated features with “=y”, then there is more to the story.”

I don’t believe I’m adding any modules, just changing “RFCOMM TTY” to “=y”. So what more is there to the story? Do I still need to set the LOCALVERSION?

Short answer: You always need to set CONFIG_LOCALVERSION. The value can be arbitrary, and my example will use “-rfcomm”, but without a correct CONFIG_LOCALVERSION you lose all kernel module drivers: the kernel cannot find its modules if this is not correct. A kernel could also load the wrong modules if CONFIG_LOCALVERSION stays the same while other parts of the configuration change.

Can you use an editor like menuconfig or nconfig to set this to “=m”? That would greatly simplify installation. Be certain not to edit the .config file directly with an ordinary editor.

In the case where you do integrate this with “=y”, then you want a new CONFIG_LOCALVERSION, and you also will need to build all modules, and install both kernel Image and all modules. The location of the install of the modules will change. You can, for safety reasons, leave the old kernel Image there, and change a boot entry such that serial console can pick alternate entries if something goes wrong. This isn’t actually as bad as it sounds in the case of an SD card model Jetson since you can unplug the SD card from the Jetson and plug it into a host PC to fix. Then again, this too can change if an initrd is used.

I would suggest something like:
CONFIG_LOCALVERSION="-rfcomm"
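
A hedged sketch of making that change without hand-editing .config, using the kernel’s own scripts/config helper (the symbol name BT_RFCOMM_TTY is my assumption for the “RFCOMM TTY support” option; confirm it with nconfig’s symbol search before relying on it):

cd $TOP
# Assumes the native-compile example above: $TOP is the kernel source root and
# $TEGRA_KERNEL_OUT holds the generated .config. BT_RFCOMM_TTY is assumed to be
# the config symbol behind "RFCOMM TTY support".
./scripts/config --file $TEGRA_KERNEL_OUT/.config \
    --set-str LOCALVERSION "-rfcomm" \
    --enable BT_RFCOMM_TTY
# Let kconfig resolve any dependencies, then rebuild Image and modules:
make O=$TEGRA_KERNEL_OUT olddefconfig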

I don’t know what your kernel version is, but let’s pretend it is 4.9.140, and that the existing or previous kernel responds to “uname -r” with “4.9.140-tegra” (which would be because 4.9.140 was compiled with CONFIG_LOCALVERSION=-tegra). You will have:

  • /boot/Image
  • /lib/modules/4.9.140-tegra/kernel/

A boot entry in “/boot/extlinux/extlinux.conf” will name:
LINUX /boot/Image

Thus boot will use “/boot/Image”, and modules will be found based on its “uname -r”. To add a new kernel, with an alternate entry, using the example version I mention above, you would also add:

  • /boot/Image-rfcomm
    (the suffix doesn’t actually do anything, it is just that I don’t want to overwrite the original kernel)
  • /lib/modules/4.9.140-rfcomm
  • A new entry in extlinux.conf.

This would leave both kernels present, and the old kernel would remain the default, at least for testing. You’d have to install all of the module content to that location, and copy the new Image in under the new name. Then, if your extlinux.conf looks like this, you’d extend it with the entry at the end of the file:

TIMEOUT 30
DEFAULT primary

MENU TITLE p2771-0000 eMMC boot options

# This is the original default.
LABEL primary
      MENU LABEL primary kernel
      LINUX /boot/Image
      APPEND ${cbootargs} root=/dev/mmcblk0p1 rw rootwait rootfstype=ext4

# This is brand new, and an edit:
LABEL rfcomm
      MENU LABEL rfcomm
      LINUX /boot/Image-rfcomm
      APPEND ${cbootargs} root=/dev/mmcblk0p1 rw rootwait rootfstype=ext4

In that extlinux.conf above:

  • The “LINUX” key/value pair names the kernel itself. We did not overwrite the original Image file, it is still there.
  • The “LABEL” is not visible to the user; it is what the bootloader looks for internally. You need a new LABEL for the non-default entry, and I used rfcomm for obvious reasons.
  • The “MENU LABEL” is visible to the end user. When serial console is stopped to pick a new kernel during boot, it is numbered, and you pick by number, but that MENU LABEL is visible beside the number. So entry 1 will show “primary”, and entry 2 will show “rfcomm”.
  • I’m not sure if TIMEOUT is actually honored. But 30 seconds is a good value while testing. It might actually be that you have less than two seconds to hit a button at the right moment to interrupt boot and enter a different kernel choice (choice “2” for our example is the new kernel).

If you have serial console attached, and you pick 2, and something goes wrong, then you still have 1. If things work well, then you could make the new entry the first entry, and then change DEFAULT to rfcomm. If anything ever goes wrong, you still have a second kernel, e.g., if some update does something unexpected.

Really though, since this is on an SD card, you could leave the old content there (the Image file), but renamed, and leave the old modules in place, and simply make Image-rfcomm the default. I recommend always adding a new entry when you can (eMMC models with security fuses burned cannot do this; an initrd for booting external media will also alter the procedure). If space is an issue, then you could just directly replace Image, but I’d still recommend renaming it Image-rfcomm so it is never mistaken for the stock kernel (and you’d also have to edit LINUX to name Image-rfcomm in extlinux.conf).
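
Putting that together, a hedged sketch of installing the new kernel alongside the old one (the paths assume the native-compile example above and the pretend 4.9.140 version; adjust both to whatever your build actually produced):

# Keep the stock kernel; copy the new Image in under a new name.
sudo cp $TEGRA_KERNEL_OUT/arch/arm64/boot/Image /boot/Image-rfcomm
# Copy the installed modules; the directory name must match the new kernel's
# "uname -r" (4.9.140-rfcomm in this pretend example).
sudo cp -r $TEGRA_MODULES_OUT/lib/modules/4.9.140-rfcomm /lib/modules/
# Then add the extlinux.conf entry shown above and reboot to test it.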


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.