Adding a new library

Hi,

I am working on updates to the kernel libraries and code for the Xavier NX. This code is being cross-compiled on an Ubuntu machine.

I want to add a new library, for example lib-json. I have read through the documentation and have tried a couple of methods, but both have failed.

The first thing is I am a little confused by the documentation. That may be the biggest issue. What is the difference between out-of-tree modules and External Modules? Which one should I be following?

What folder should the source for the new library be extracted into? Is it the kernel source folder or its own folder?

When I try the following to “prepare jetson kernel headers”, what is <local_src_dir>? Is this the folder containing the source for the new library I want to build, or is it the kernel source folder?
$ cd <local_src_dir>

$ tar -xjf /Linux_for_Tegra/kernel/kernel_headers.tbz2

The next question relates to “Building External Kernel”
The instructions are:
$ cd <path_to_module_source>

$ make ARCH=arm64 -C <kernel_directory> M=$(pwd)

In this example, what should <kernel_directory> point to? Is this the kernel source, the kernel build folder or the rootfs folder?

Thanks for your guidance on this.

Malcolm

Some partial information…

There is a “whole”, or “integrated”, kernel: the kernel image. When it is on the filesystem (and not used directly from a binary partition), this is the file “/boot/Image”. This is also a kernel compile target: “make Image”.

Modules are also kernel code, but they can be loaded and unloaded from a running system. They’re a simple file copy to add if you know where to copy them to. Replacing the whole Image file is riskier and has other possible requirements. Not all parts of the kernel can be built as a module, but most of the device drivers can be in the form of a module. The content of the running system’s modules can be seen via “lsmod”, but note that the vast majority of kernel content is not in the form of a module, and thus won’t be visible via lsmod.

On a Jetson the running kernel tells you what its configuration is via the pseudo-file “/proc/config.gz” (it is not a real file, it is part of the kernel running in RAM pretending to be a file). If you were to compile a kernel, then aside from “CONFIG_LOCALVERSION”, a copy and decompress of “config.gz” would be an exact match for the configuration used to build that kernel. This lets you start modifications from an exact match.
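For example, you could copy that configuration out like this (the destination path is only a placeholder; put it wherever your build expects a “.config”):

# Copy the running kernel's configuration out as a starting ".config":
zcat /proc/config.gz > /some/where/.config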

Incidentally, the command “uname -r” results in a combination of the base kernel version along with the “CONFIG_LOCALVERSION” which was set at the time of compile. The CONFIG_LOCALVERSION suffix is almost universally “-tegra” on Jetsons. The reason a match of this matters is that the kernel Image finds its modules in some subdirectory of:
/lib/modules/$(uname -r)/kernel
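For example (the version string, module name, and destination subdirectory below are only placeholders, not something specific to your system):

$ uname -r
4.9.253-tegra
# A hypothetical module added by file copy, followed by telling the system about it:
$ sudo cp mymodule.ko /lib/modules/$(uname -r)/kernel/drivers/misc/
$ sudo depmod -a
$ sudo modprobe mymodule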

“Out of tree” is referring to compile, and does not change that it is a module being built. Note that if you were building the entire Image file, then by definition you are “in-tree” because Image is the tree. Modules can be compiled without building the Image file, but then you’d need some extra setup. You could also compile some modules in some separate directory away from the kernel source, but then you’d need more setup to point to the kernel source. This is what is meant by out-of-tree: Your module code being built is in its own private subdirectory somewhere and pointing at the kernel code, versus being in some subdirectory of the kernel code during build. It is still a module regardless of whether it is located in kernel source during compile, versus being elsewhere. Also, a module is still kernel code, it just binds differently.

Quite often third party drivers are shown for “out-of-tree” build. These drivers almost universally expect the kernel headers (which are really the part needed for pointing to during out-of-tree compile) to be in place and to be an exact match of the running system. Those same headers are not necessarily available on a Jetson, at least not configured correctly. What the instructions fail to mention is that if you have full kernel source, and if you’ve configured that source to match your running system, then this is a valid location to point your out-of-tree content to. This works, even on Jetsons.

Note that NVIDIA provides more content when downloading the kernel than a typical kernel source download does. When building kernel source some content can be referred to via a relative path using “../”, which means the parent directory of the current directory. The “TOP” of the source is the main parent directory of the kernel source, and most compiles would never refer to something outside of kernel source, but the NVIDIA content can refer (via relative paths) to content not in the kernel source itself. Thus some content requires more than just headers to build, even if you are out-of-tree and pointing at full source.

In the case where you do unpack the full NVIDIA version of kernel source, you will have a structure such as the one below (you could put this content somewhere else, this is just where I put it):

# tree -d -L 2 /usr/src/sources
/usr/src/sources
├── hardware
│   └── nvidia
└── kernel
    ├── kernel-4.9
    ├── nvgpu
    └── nvidia

In that case the “TOP” is “/usr/src/sources/kernel/kernel-4.9”. In cases where you are told to point at kernel headers, then for this example, you could substitute “/usr/src/sources/kernel/kernel-4.9”. You would want to configure that kernel source to match your running system before using it, but otherwise this would work for your compile of out-of-tree content just like kernel headers would work if they were there and configured. On a Jetson, don’t use the kernel headers, use the full source configured to match your current running system. You’ll save yourself some headaches and grief.
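As a hedged sketch, a native out-of-tree module build against that configured source might look like this (the module source directory is hypothetical, and it assumes your module’s Makefile uses the usual obj-m setup):

cd /where/your/module/source/is
make -C /usr/src/sources/kernel/kernel-4.9 M=$(pwd) modules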

So far as commands go for compiling there are two categories: (A) Native compile, directly on a system of the same architecture (e.g., built directly on the Jetson, or perhaps a 64-bit RPi), or (B) cross-compile from a system of one architecture which differs from the destination for the kernel content.

Whenever you see “ARCH=arm64”, then it means you are looking at instructions for cross-compile. You would thus also need the correct cross-compile tools installed since the “native” tools on your system would be for something like x86_64/amd64 and could not result in usable code on arm64/aarch64.

If you compile natively, there are far fewer things to set up. You would never use “ARCH=arm64” if compiling natively on the Jetson…this would trigger some “cross-compile” behavior which would actually make it a pain to compile natively. If you compile natively you can just leave out the “ARCH=arm64”.
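To make the contrast concrete, here is a hedged sketch of the same out-of-tree module build both ways (the cross tool prefix is an assumption, and <kernel_directory> is whatever configured source or headers you point at):

# Cross-compile from an x86_64 host:
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -C <kernel_directory> M=$(pwd) modules
# Natively on the Jetson, simply drop ARCH and CROSS_COMPILE:
make -C <kernel_directory> M=$(pwd) modules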

The official documents explain cross compile quite well and it isn’t all that difficult, but you’d need to read that once before cross compiling. Those docs do not explain out-of-tree builds, but you would need to set up 100% like what those docs explain, and then add the slight differences for out-of-tree builds. You would still want to unpack the full NVIDIA kernel source somewhere and configure that correctly, and then point at that for your build instead of at some kernel headers (you could use kernel headers if configured correctly, but like I said before, you’re going to be less frustrated if you just install the full source and treat it as headers).

I always advise people to build the full kernel Image first, even if only building modules. I say this because it completes some of the configuration which you’d otherwise have to perform separately before a module build. You’re more likely to have a correct configuration if you successfully build a kernel Image once, and then move on to building modules or out-of-tree content. Think of it as validation.

If I were to add kernel source on my desktop PC for cross compile, then I’d be tempted to unpack it at:
/usr/src/arm64
…which would result in having this:

# tree -d -L 2 /usr/src/arm64/sources
/usr/src/arm64/sources
├── hardware
│   └── nvidia
└── kernel
    ├── kernel-4.9
    ├── nvgpu
    └── nvidia

…and this would result in pointing here when it wants headers:
/usr/src/arm64/sources/kernel/kernel-4.9

There are a lot of other details which would simplify your life, but which require setup only once. You’re free to ask for more details at any time, but this would be a much longer reply if I tried to put everything into this first reply.

Almost forgot: Building kernel code is not building a library. The code is part of the kernel (in bare metal “kernel space”). A library is in “user space”, and is not directly part of the kernel.

Hi

Thanks for the detailed reply. It’s appreciated.

I did what you suggested in your reply. I extracted the sources without modification, created the kernel Image, and then created a disk image using tools/jetson-disk-image-creator.sh. All is good. The image builds and boots and is usable. I can add compiled binaries into the rootfs, and when the disk image boots, the compiled binaries are in the booted image. As mentioned, now I want to add a new library (for example lib-json) to the kernel and have it show up in the disk image.

I have been following this structure from the Jetson_Linux_R32.6.1_aarch64.tbz2 release:

Linux_for_Tegra
├── kernel
├── nv_tools
├── rootfs
├── nv_tegra
└── sources
    ├── hardware
    │   └── nvidia
    └── kernel
        ├── kernel-4.9
        ├── nvgpu
        └── nvidia

Are you saying that I should create a folder as shown below (see **):

Linux_for_Tegra
├── kernel
├── nv_tools
├── rootfs
├── nv_tegra
├── sources
└── /usr/src/arm64/sources   (** new folders??)
    ├── hardware
    │   └── nvidia
    └── kernel
        ├── kernel-4.9
        ├── nvgpu
        └── nvidia

If I have understood this correctly, how do I make sure the kernel compile and build process picks up the new lib-json?

Thanks
Malcolm

Libraries are never part of a kernel. Libraries are part of the operating system surrounding the kernel. A kernel or kernel module can access hardware directly, but a library only has access to a user’s program, and can only indirectly access hardware by permission of the kernel. So installing something like a JSON library is unrelated to the kernel.

Some trivia, but perhaps useful in this case, is that when the Linux kernel boots, it is process ID 0 (PID 0). The kernel then runs one and only one thing: Init. Init has different layouts on different distributions, but older distributions used actual script files, and newer distributions tend to use something called systemd. Regardless of what it is called it is init (process ID 1) which runs everything the kernel itself is not running. This is truly the operating system, and the kernel is truly its own program which chooses to run the operating system as its first boot task. Library setup is part of the operating system.

Most libraries you will see are “dynamic” libraries. These are basically snippets of code which can be dynamically linked at run time to some program. None of those programs are the kernel. All such programs have only the authority the kernel grants. The part of the operating system which connects a program with a library is the “linker”. If you want to see which libraries your system sees in its default library path, then run this command:
/sbin/ldconfig -p

Adding a library (if properly compiled) is nothing more than copying it to the default linker path. If you want to see the current default search paths:
ld --verbose | grep SEARCH_DIR

You would not normally want to risk messing up the linker by manually copying files around unless you know what you are doing. Most of the time the worst that would happen is that the file you are adding doesn’t work, but if the wrong thing happens, then there is a small possibility the system would no longer boot correctly (or some programs would not function).

Let’s say you want to add a library which is published on a package server; then you’d just use “sudo apt-get install the_package_name” to install it. If the library is something you personally created, then a typical location for custom libraries is “/usr/local/lib”; you might just copy the file there and tell the linker about it via “sudo ldconfig” (then it would show up in “ldconfig -p”). Instead of running “sudo ldconfig” you could also reboot. I prefer using “sudo ldconfig” just in case something went wrong…prior to rebooting you’d have a chance to remove a bad file.
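A minimal sketch of that flow, assuming a hypothetical library named “libhello.so” that you built yourself:

# Copy the custom library into the custom-library location and refresh the linker cache:
sudo cp libhello.so /usr/local/lib/
sudo ldconfig
# Verify the linker now sees it:
/sbin/ldconfig -p | grep hello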

The files which are provided by flash (from the host PC side) are in the “Linux_for_Tegra/rootfs/” directory. In my example above, if you were to copy a library named “libhello.so” to “Linux_for_Tegra/rootfs/usr/local/lib”, then the file would be there after flash.
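For example, from the host PC (this assumes you are sitting in the “Linux_for_Tegra” directory of your flash software):

sudo cp libhello.so rootfs/usr/local/lib/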

I want to emphasize that adding a library has nothing to do with the kernel or kernel modules or device drivers. What exactly do you want to add? If it is a library it is a completely different topic versus a driver (although some libraries need certain features in the kernel to be enabled).

NOTE: Perhaps what you want is a standard package, in which case you might search for it and see possibilities like this:
apt search libjson

Hi,

Thanks for your response. I might have misused some of the terminology, but your explanation is great.

Let me start by describing what has to be added:

  1. AC9260 driver. This is a backport and I have the sources. Driver:

[kernel/git/iwlwifi/backport-iwlwifi.git - Backport tree for iwlwifi](https://git.kernel.org/pub/scm/linux/kernel/git/iwlwifi/backport-iwlwifi.git)

  2. A custom application that we have developed. I would think it would go into /usr/bin.

There will be multiple releases and updates to this application code over time, and there will be additional drivers (possibly upgrades over time) as new interfaces are added.

How would you recommend these be compiled and integrated into the Nvidia NX kernel/rootfs environment?

Malcolm

If you plan on multiple releases and support over time, then I’d recommend cross-compiling from the host PC. Ideally, you would install the full kernel source on the PC as root-only write (anyone could read, but you don’t want stray configs for other people working on a kernel build).

The kernel source itself could change over time, and so I would recommend unpacking the kernel source yourself to an obvious “non-automatic update” location (it is possible SDKM would update, not sure, but no need to take the chance). For the example I will pretend the kernel version is 4.9.140, adjust for your actual release. I’d unpack kernel source at:
/usr/local/src/kernel/aarch64/4.9.140
(this directory should be owned by root, readable by everyone, but writable only by root)
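A hedged sketch of setting that up (the kernel source tarball name and its download location are assumptions; adjust to whatever your L4T release actually provides):

sudo mkdir -p /usr/local/src/kernel/aarch64/4.9.140
cd /usr/local/src/kernel/aarch64/4.9.140
sudo tar xjf /where/you/downloaded/kernel_src.tbz2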

Unpacking the kernel source there would result in creating this (and subdirectories) within /usr/local/src/kernel/aarch64/4.9.140:

.
├── hardware
│   └── nvidia
└── kernel
    ├── kernel-4.9
    ├── nvgpu
    └── nvidia

I would assign an environment variable to the top of this:
export TOP='/usr/local/src/kernel/aarch64/4.9.140/kernel/kernel-4.9'

I would also make sure this source is pristine (only needed once):

cd $TOP
sudo make mrproper

From there I would see how to set up and build the Image and modules in separate locations which reference this content even if you only plan to compile from outside of that tree. An example of compiling the Image (and some other content) to an outside location follows (adjust for your situation)…

You already have “$TOP” set up. Now set up for temporary locations owned by your regular user, and cross compile tools. You can adjust these to your case, e.g., maybe you use a different cross compiler (see the official docs on kernel customization…these list the easiest way to get cross tools of the correct version) or different temp location. Keep in mind that after each build you can delete temp locations and replace them with empty directories if you want to start over or be certain nothing was kept from a previous build.

Examples:

# For cross-compile you'll always name the ARCH:
export ARCH=arm64

# Assumes some "standard" cross-compile tools are installed in "/usr/bin"
# with the prefix "aarch64-linux-gnu-":
export CROSS_COMPILE=/usr/bin/aarch64-linux-gnu-

# A somewhat arbitrary location for an individual temporary output of
# compile...be sure you've created this, e.g., with "mkdir".
export TEGRA_KERNEL_OUT="$HOME/kernel/temp"

# Now add a temp location for make targets which install things:
export TEGRA_MODULES_OUT="$HOME/kernel/modules"
export TEGRA_FIRMWARE_OUT="$HOME/kernel/firmware"

# If you ever want to start over, then only the temp locations need to be
# wiped out and recreated as empty directories (be very careful to name
# the right location when performing a recursive rm...notice how this
# does not use sudo):
rm -Rf $TEGRA_KERNEL_OUT
rm -Rf $TEGRA_MODULES_OUT
rm -Rf $TEGRA_FIRMWARE_OUT
mkdir -p $TEGRA_KERNEL_OUT
mkdir -p $TEGRA_MODULES_OUT
mkdir -p $TEGRA_FIRMWARE_OUT

The official docs explain the above; this is just customized slightly for your case. During an actual compile you’ll need to remember that the “.config” file goes to “$TEGRA_KERNEL_OUT”, and that every command you use should refer to “O=$TEGRA_KERNEL_OUT”, including commands for tools to configure the kernel.

A typical compile of full source is as follows, but this will only work if you’ve set up those environment variables and temp locations:

cd $TOP
# Note that because we used "export", ARCH and some of the other variables
# above are automatically part of the environment for these commands even
# though we're not typing them in every time.
make O=$TEGRA_KERNEL_OUT tegra_defconfig
# Let's pretend we want to change something from the default config:
make O=$TEGRA_KERNEL_OUT menuconfig
# Now we want to build the kernel...I'll assume you use 6 CPU cores, but
# "-j 6" could be any other value:
make -j 6 O=$TEGRA_KERNEL_OUT Image

# Now we make and install modules:
make -j 6 O=$TEGRA_KERNEL_OUT modules
# This populates "$TEGRA_MODULES_OUT" (INSTALL_MOD_PATH names the destination):
make -j 6 O=$TEGRA_KERNEL_OUT modules_install INSTALL_MOD_PATH=$TEGRA_MODULES_OUT

# Similar, build firmware and install, which populates "$TEGRA_FIRMWARE_OUT",
# but you will want to explicitly name firmware paths:
make -j 6 O=$TEGRA_KERNEL_OUT firmware_install INSTALL_FW_PATH=$TEGRA_FIRMWARE_OUT

If the above is working, then you can easily change just the “$TOP” content if you need to modify it.
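If you eventually want those modules to end up on the flashed system, one hedged option (the Linux_for_Tegra path is only an example; adjust to your install) is to copy the installed module tree into the rootfs before flashing:

sudo rsync -a $TEGRA_MODULES_OUT/lib/modules/ /path/to/Linux_for_Tegra/rootfs/lib/modules/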

I don’t know your exact situation, so if you build on this, then you can ask more questions as they show up. Do try to cross-compile the Image file and modules once just to see that it works.

I also advise checking on every single build that you’ve set up “CONFIG_LOCALVERSION” correctly. Normally, if you want to use the existing module location, then this has to match. Most default Jetson installs use:
CONFIG_LOCALVERSION="-tegra"
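You can verify what a given build is using, for example:

grep CONFIG_LOCALVERSION $TEGRA_KERNEL_OUT/.config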

I also recommend not overwriting the original Image file on the Jetson until you know the new one works. You could just add a new extlinux.conf entry and copy the file over with a slightly modified name, e.g., when you put Image in “/boot” of the Jetson, you could name it “Image-1”, “Image-2”, and so on. There are so many variations on how to do this that it is hard to cover them all, so just ask when you get to a question.
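As a rough sketch of such an extra entry in “/boot/extlinux/extlinux.conf” (the label and file name here are assumptions; copy the INITRD and APPEND lines from your existing default entry rather than from this example):

LABEL testkernel
      MENU LABEL test kernel Image-1
      LINUX /boot/Image-1
      INITRD /boot/initrd
      APPEND ${cbootargs}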