Can anyone help me find a file and folder while building the kernel?

Hi, everyone.

I’ve already asked about the SPI function on the AGX.

(Here is the address of that question:
Is there anyone who can tell or teach me how to use SPI on AGX Xavier? )

ShaneCCC suggested that I should start by building a custom kernel.
So, I started with this document: ‘https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide%2Fkernel_custom.html%23’

But I encountered a problem in the ‘Building the NVIDIA Kernel’ step.

###################################
5.Replace <release_packagep>/Linux_for_Tegra/kernel/Image with a copy of:

$TEGRA_KERNEL_OUT/arch/arm64/boot/Image

6.Replace the contents of Linux_for_Tegra/kernel/dtb/ with the contents of:

$TEGRA_KERNEL_OUT/arch/arm64/boot/dts/
###################################

I can’t find the Linux_for_Tegra/kernel/Image file,
and there is no Linux_for_Tegra/kernel/ folder if I follow what the document says.
(I think Linux_for_Tegra/source/public/kernel/kernel-4.9/ is the right folder path. Such things confuse someone like me who is new to Linux.)

There is no Linux_for_Tegra/kernel/dtb/ folder either.
(FYI, I can find $TEGRA_KERNEL_OUT/arch/arm64/boot/Image and $TEGRA_KERNEL_OUT/arch/arm64/boot/dts/.)

Can anyone please tell me where the Image file to replace is?

And if any NVIDIA employees see this question, please fix the document to avoid misunderstandings.

Thanks for reading.
I hope I can get a reply!

The content of “Linux_for_Tegra/” exists whenever the “driver package” has been installed on your host PC. If you ever flashed using JetPack/SDK Manager, then this would have unpacked the driver package and created this for you somewhere in “~/nvidia/nvidia_sdk/Jetpack...something.../Linux_for_Tegra/”, where “something” is named after the particular JetPack release and a designation for the type of Jetson you flashed. The Xavier is referred to as “P2888”, and so this would become something such as this example:
~/nvidia/nvidia_sdk/JetPack4.2_Linux_P2888/Linux_for_Tegra/

Note that you can download and install the driver package separately. A list of releases (since you want to match releases when updating a part of an existing install with updates) is here (you may need to log in and go there a second time):
https://developer.nvidia.com/linux-tegra
…or the JetPack/SDKM which works with that release:
https://developer.nvidia.com/embedded/jetpack-archive

Within this the content of “Linux_for_Tegra/kernel/” is part of what gets copied into the final boot image to be flashed. So if you were to replace “Image” within this with your version, then this would be flashed (be sure to save a backup of the original in case it doesn’t work). Similarly, device tree files may exist either in the “Linux_for_Tegra/kernel/” location or the “Linux_for_Tegra/bootloader/” directory tree, and if you modify one and put it back in place where it was, then the modifications get flashed if your particular board uses that file.
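To make that replace step concrete, here is a hedged sketch of the back-up-then-copy pattern. The sandbox directories are created only so the snippet can run stand-alone; on a real host you would use your actual Linux_for_Tegra and $TEGRA_KERNEL_OUT locations instead:

```shell
# Stand-in directories so this sketch runs anywhere; substitute your
# real Linux_for_Tegra and kernel build output paths.
L4T=$(mktemp -d)/Linux_for_Tegra
TEGRA_KERNEL_OUT=$(mktemp -d)
mkdir -p "$L4T/kernel" "$TEGRA_KERNEL_OUT/arch/arm64/boot"
echo "original kernel" > "$L4T/kernel/Image"
echo "custom kernel"   > "$TEGRA_KERNEL_OUT/arch/arm64/boot/Image"

# The pattern itself: save a backup of the original first, then replace.
cp "$L4T/kernel/Image" "$L4T/kernel/Image.backup"
cp "$TEGRA_KERNEL_OUT/arch/arm64/boot/Image" "$L4T/kernel/Image"
```

The same pattern applies to the dtb content: back up the original before overwriting it.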

All of the “$TEGRA_KERNEL_OUT/” is part of a kernel build and is not part of the “Linux_for_Tegra/” content. It is up to you to build content which ends up in “$TEGRA_KERNEL_OUT/”, and then copy it to the right location within “Linux_for_Tegra/” (saving a backup of any original). The Image file is the actual kernel.

Within a kernel source tree, after you configure and make (various make options depending on what you wish to make), then the content you just created with that configuration should show up in “$TEGRA_KERNEL_OUT”. Don’t forget to set up where “$TEGRA_KERNEL_OUT” is, otherwise you’ll get unexpected “bad things” when it tries to put the output in the root of your host PC file system.

So if you have flashed, then there will exist a “Linux_for_Tegra/kernel/Image” file, but this won’t be your modified kernel. However, the “find” command will help you greatly to find individual files within a big kernel tree. Imagine you have finished “make O=$TEGRA_KERNEL_OUT Image”, and that “$TEGRA_KERNEL_OUT” was some clean work area you had set up. To find Image:
find $TEGRA_KERNEL_OUT -name 'Image'

Or, if you were looking for device tree dtb files:
find $TEGRA_KERNEL_OUT -name '*.dtb'

It just happens that if your build failed, then those won’t exist. You can always ask for more details after experimenting with this and you have a more specific question.

Note that the official docs refer to cross compile from an Ubuntu host PC, but you can also natively compile from the Xavier. Instructions vary slightly depending on which you do. You might find this of use, but be careful to note this is old and for a TX2:
https://forums.developer.nvidia.com/t/tx2i-wifi-support/63839/2

Just ask again if you run into something you want more information about. Here is something in general about the world of building kernels, modules, and device trees (which was written for a TX1, but applies to anything):
https://forums.developer.nvidia.com/t/about-kernel/77995/18


Thanks! That helps a lot.
I will try it and check what you explained.

Finally I understand what steps 5 and 6 mean.
In my case, I build the kernel on the AGX board.
So I need to copy the Image file and dts folder from $TEGRA_KERNEL_OUT on the AGX board to the host Ubuntu desktop folder (“…/nvidia/nvidia_sdk/JetPack_4.4_DP_Linux_DP_JETSON_AGX_XAVIER/Linux_for_Tegra/kernel” is my location).

And the next step (7) tells me:
####################################
Execute the following commands to install the kernel modules:
$ sudo make ARCH=arm64 O=$TEGRA_KERNEL_OUT modules_install
INSTALL_MOD_PATH=/Linux_for_Tegra/rootfs/
####################################
I should execute these commands on the AGX board, right?

Here is my understanding so far. (I use SDK Manager to flash JetPack.)

  1. Configure some source and build the kernel (on the AGX board, in my case).
  2. Move the built Image file and dts folder to the host computer to re-flash the AGX board.

And what is next?
Step 7 tells me to go back to the AGX board and execute a “make” command on the AGX. That makes no sense to me.
I think that after moving the files to the host computer, the next step should be to re-flash the AGX JetPack using SDK Manager.
But the document says… do nothing else and go back to the AGX board.

I know that I’m very new to Linux.
I apologize if my questions bother you.

And one more question…
Do I really need to build the kernel and configure the device tree just to use SPI SFIO??
I “just” want to send low-level data to my TI MCU and receive data back.
(e.g., I send 0x55, 0x12, 0x34 and the MCU returns the right answer for that packet.)

Because in the microcontroller world, configuring registers is the only thing I have to do.

Anyway, thanks for your super kind reply.
I’m really glad this forum has so many kind people like you.
Thanks again.

Sorry, this will be a bit at a time, it is hard to know where to start sometimes. It’ll seem easy after you’ve set things up and built a kernel once. Do know that “make” is a command on the command line, and this is run at the top level of the kernel source. Various arguments may be used, e.g., “make O=/some/where Image” would use the “/some/where” location to place output when building the target file Image. There are several uses of make O=/some/where ... in which the “O=...” is always the same, but the rest of the command changes…

If your kernel source is on the Xavier, then this is where you run the commands with make in their simplest form. If your kernel source is on the host PC, then you will need to also add some options for the cross compiler tool chain location and architecture (if you see ARCH=..., then you are cross compiling, and there would also be a CROSS_COMPILE=...). Most of the instructions you see in the official documents are for cross compile from a PC. If you have kernel source on the Xavier, then it simplifies instructions.
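As a sketch of the difference, something like this picks the right make arguments depending on where you build (the toolchain path is only an assumed example, not a requirement):

```shell
# Assumed toolchain prefix for the cross-compile case -- substitute
# wherever your aarch64 gcc actually lives (note the trailing
# file-name prefix, not just the bin directory).
TEGRA_KERNEL_OUT=${TEGRA_KERNEL_OUT:-$HOME/kernel_out}
CROSS_PREFIX=$HOME/l4t-gcc/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-

if [ "$(uname -m)" = "aarch64" ]; then
    # Native build on the Xavier: make already knows the architecture.
    MAKE_ARGS="O=$TEGRA_KERNEL_OUT"
else
    # Cross compile from a PC: name both ARCH and CROSS_COMPILE.
    MAKE_ARGS="ARCH=arm64 CROSS_COMPILE=$CROSS_PREFIX O=$TEGRA_KERNEL_OUT"
fi
echo "make $MAKE_ARGS tegra_defconfig"
```

Either way, the “O=...” part stays the same across every make invocation of that build.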

Where is your kernel source (sorry, I know this is sort of what you were asking to start with)…is it on the Xavier (if so, in what directory?), or are you cross compiling on the host PC? Which directory is your cross compile tool chain in (only applies if cross compiling from the PC)? I can give more explicit information if you provide the locations of the directories you have (or state that you are trying to find those locations, and I can then say how to create them or find the information):

  • Kernel source directory and if on the Xavier or PC.
  • Temporary output location of basic build.
  • Temporary output location of modules if you are building modules.
  • Temporary output location of device tree files if you are building device trees.
  • The result of the command uname -r on an unmodified Xavier.
  • Which L4T release is this for? See “head -n 1 /etc/nv_tegra_release”.

The following are some semi random details, not necessarily in a specific order…if you can answer those questions, then what follows isn’t too important. I add these to help with the above, and to add some details to the use of “make”.

Some comments on this:

  • You never use the “ARCH=” unless you are cross compiling. If you compile natively on the Jetson, then “make” already knows the architecture is arm64. If you are cross compiling, then you also need to include “CROSS_COMPILE=/some/where/to/alternate/gcc/prefix/name”. Native compiles on the Xavier do not require special tools (no “CROSS_COMPILE” when building on an Xavier for an Xavier).
  • Don’t use the “modules_install” step unless you’ve specified an alternate module output location (since the “INSTALL_MOD_PATH” was given in the example, then this works well when you name a temporary location, but the example is complicated via cross compile). The example they gave is to use on a host PC prior to flash such that the output goes into the sample rootfs directory prior to the flash.

When you have kernel source, then it needs to be configured prior to building. The configuration produces a “.config” file, and the directory with that file must be used consistently. Best practice is to name a build location outside of the actual source code so that the source code remains untouched; this implies the “.config” file is in the output location, which in turn uses the “O=/some/where” option. An example of native compile to a temp location without cross compiler:

mkdir ~/my_build
cd /where/ever/source/is
make O=~/my_build tegra_defconfig

…and as a result the location “~/my_build/” will contain file “.config”. From then on all compile commands would include “O=~/my_build”. If you fail to use the “O=”, then your config would be lost.

Or you could copy an existing config there:

mkdir ~/my_build
cp /proc/config.gz ~/my_build/
cd ~/my_build
gunzip config.gz
mv config .config
# Use an editor so you get this line edited:
CONFIG_LOCALVERSION="-tegra"

The reason for CONFIG_LOCALVERSION="-tegra" is so that the modules will be found. Before building a new kernel you will find that the output of “uname -r” ends with “-tegra”, and that this setting keeps “uname -r” the same even if you install another boot kernel Image. Modules are searched for at “/lib/modules/$(uname -r)/”.
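A tiny simulation of why this matters (the version string and directories are made up to keep the demonstration self-contained):

```shell
# Modules are searched for under /lib/modules/$(uname -r)/, so a kernel
# reporting "4.9.140" will not find modules installed for "4.9.140-tegra".
root=$(mktemp -d)                 # stands in for the real filesystem root
mkdir -p "$root/lib/modules/4.9.140-tegra/kernel"

for krel in 4.9.140 4.9.140-tegra; do
    if [ -d "$root/lib/modules/$krel" ]; then
        echo "$krel: modules found"
    else
        echo "$krel: modules MISSING"
    fi
done
```

Appending “-tegra” via CONFIG_LOCALVERSION is what makes the new kernel’s “uname -r” line up with the existing module directory.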

You don’t necessarily need to use flash tools to install the Image file (depends on signing, which in turn may depend on which release). The location you will find the Image is within the “O=/some/where”. If you set environment variable “$TEGRA_KERNEL_OUT” to “~/my_build”, then this implies:
make O=$TEGRA_KERNEL_OUT ...
…and you would find Image via:

# This is the same as "cd ~/my_build"
cd $TEGRA_KERNEL_OUT
find . -name Image

What complicates this is that when you flash it uses the kernel on the PC at the location you mentioned: ~/nvidia/nvidia_sdk/JetPack_4.4_DP_Linux_DP_JETSON_AGX_XAVIER/Linux_for_Tegra/kernel/. This gets copied to the location “nvidia/nvidia_sdk/JetPack_4.4_DP_Linux_DP_JETSON_AGX_XAVIER/Linux_for_Tegra/rootfs/boot/” (file name “Image”), and then this is what gets flashed. If you are installing without flashing, then you can just directly copy Image to the Jetson in “/boot”, but you want to be careful to do this correctly or else it won’t boot and you’ll have to flash.

It is incredibly difficult to say exactly what step to do since the instructions vary depending on release, but you might try the following once you have built a new kernel Image file (which is relatively risk free…the following is on the Xavier, and you will need serial console to pick kernel at boot…and if it fails, it won’t hurt anything):

sudo -s
cd /boot/extlinux
# ...use any editor you want...edit file "extlinux.conf":
# Copy the block with "LABEL primary" and make a renamed duplicate block like this:

LABEL primary
      MENU LABEL primary kernel
      LINUX /boot/Image
      APPEND ${cbootargs} root=/dev/mmcblk0p1 rw rootwait rootfstype=ext4

LABEL testing
      MENU LABEL testing
      LINUX /boot/Image.testing
      APPEND ${cbootargs} root=/dev/mmcblk0p1 rw rootwait rootfstype=ext4

Note how any kind of label was replaced with “testing” in the duplicate entry, and the Image file name was replaced with Image.testing. So if you copy your newly compiled Image file into “/boot” with the new file name Image.testing, then the alternate boot entry will use that kernel instead of the default, and the default will still be there. If for some reason this does not work, then the default is still the same thing it always was and there is no risk of not booting.
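The file copy that goes with that alternate entry might look like this (sandboxed so the sketch can run anywhere; on the Xavier the destination would really be /boot and would need sudo):

```shell
# Stand-ins so the sketch is self-contained; on a real Xavier, BOOT
# would be /boot and TEGRA_KERNEL_OUT your actual build output.
BOOT=$(mktemp -d)
TEGRA_KERNEL_OUT=$(mktemp -d)
mkdir -p "$TEGRA_KERNEL_OUT/arch/arm64/boot"
echo "new kernel" > "$TEGRA_KERNEL_OUT/arch/arm64/boot/Image"
echo "old kernel" > "$BOOT/Image"

# Install under an alternate name; the default Image is never touched.
cp "$TEGRA_KERNEL_OUT/arch/arm64/boot/Image" "$BOOT/Image.testing"
ls "$BOOT"
```

Because the default Image remains in place, the original boot entry keeps working even if the test kernel fails.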

Thanks to your help.

Now I restart build source code from my host PC.
(it is desktop PC with native Ubuntu 18.04 OS.)

So now, I’m done with process shown in below.

\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\

Building the NVIDIA Kernel

Follow the steps in this procedure to build the NVIDIA kernel.

Prerequisites

•You have downloaded the kernel source code.

•You have installed the utilities required by the kernel build process.

Install the utilities with the command:

$ sudo apt install build-essential bc

To build the Jetson Linux Kernel

1.

Set the shell variable with the command:

$ TEGRA_KERNEL_OUT=<outdir>
–> change <outdir> to /home/jhai/OUT

Where:

• <outdir> is the desired destination for the compiled kernel.

2.

If cross-compiling on a non-Jetson system, export the following environment variables:

$ export CROSS_COMPILE=<cross_prefix>
–> change <cross_prefix> to /home/jhai/l4t-gcc/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-
(note: CROSS_COMPILE is a prefix, so it must include the tool file-name prefix, not just the bin directory)

$ export LOCALVERSION=-tegra

3.

Execute the following commands to create the .config file:

$ cd <kernel_source>
–> change <kernel_source> to /home/jhai/Jetson_Compile/Linux_for_Tegra/source/public/kernel/kernel-4.9
(Where I extract download source file)

$ mkdir -p $TEGRA_KERNEL_OUT

$ make ARCH=arm64 O=$TEGRA_KERNEL_OUT tegra_defconfig

4.

Execute the following commands to build the kernel including all DTBs and modules:

$ make ARCH=arm64 O=$TEGRA_KERNEL_OUT -j<n>

Where <n> indicates the number of parallel processes to be used. A typical value is the number of CPUs in your system.

5.

Replace <release_packagep>/Linux_for_Tegra/kernel/Image with a copy of:
–> change <release_packagep> to /home/jhai/nvidia/nvidia_sdk/JetPack_4.4_DP_Linux_DP_JETSON_AGX_XAVIER
(Finally, I found it!!! Special thanks to ‘linuxdev’!!!)

$TEGRA_KERNEL_OUT/arch/arm64/boot/Image

6.Replace the contents of Linux_for_Tegra/kernel/dtb/ with the contents of:

$TEGRA_KERNEL_OUT/arch/arm64/boot/dts/

7.Execute the following commands to install the kernel modules:

$ sudo make ARCH=arm64 O=$TEGRA_KERNEL_OUT modules_install \

INSTALL_MOD_PATH=/Linux_for_Tegra/rootfs/
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
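Collected in one place, the quoted steps can be sketched as a dry-run script; it only echoes the make commands so the paths can be sanity-checked first (the paths are the ones substituted above and are examples, not requirements):

```shell
# Dry run of steps 1-7: echoes each command instead of running it.
# Remove the "echo" prefixes to build for real. All paths are examples.
TEGRA_KERNEL_OUT=$HOME/OUT
KERNEL_SRC=$HOME/Jetson_Compile/Linux_for_Tegra/source/public/kernel/kernel-4.9
L4T=$HOME/nvidia/nvidia_sdk/JetPack_4.4_DP_Linux_DP_JETSON_AGX_XAVIER/Linux_for_Tegra
export CROSS_COMPILE=$HOME/l4t-gcc/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-
export LOCALVERSION=-tegra

mkdir -p "$TEGRA_KERNEL_OUT"
echo cd "$KERNEL_SRC"
echo make ARCH=arm64 O="$TEGRA_KERNEL_OUT" tegra_defconfig
echo make ARCH=arm64 O="$TEGRA_KERNEL_OUT" -j"$(nproc)"
echo cp "$TEGRA_KERNEL_OUT/arch/arm64/boot/Image" "$L4T/kernel/Image"
echo cp -r "$TEGRA_KERNEL_OUT/arch/arm64/boot/dts/." "$L4T/kernel/dtb/"
echo sudo make ARCH=arm64 O="$TEGRA_KERNEL_OUT" modules_install INSTALL_MOD_PATH="$L4T/rootfs/"
```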

After this step 8 tells me below.

Optionally, archive the installed kernel modules using the following command:
cd <modules_install_path>
tar --owner root --group root -cjf kernel_supplements.tbz2 lib/modules

OK. Here is the problem.

I don’t know where <modules_install_path> is.
I replaced <modules_install_path> with “/home/jhai/nvidia/nvidia_sdk/JetPack_4.4_DP_Linux_DP_JETSON_AGX_XAVIER/Linux_for_Tegra/rootfs”
(it is the same path I used in step 7, INSTALL_MOD_PATH=/Linux_for_Tegra/rootfs/)

But the next command does not work.

I also don’t understand what it means to install kernel modules.
In Windows, “install” means installing a program onto the system.
But… if I build the kernel code successfully on the host PC, isn’t this code supposed to be installed on the Jetson board, not on the host PC?

I hope I can get a reply.
Thanks a lot!!!

A caution prior to answering…in cross compile you would never use the default for the module install path. Even for native compiles on a Jetson I will suggest avoiding the default which is directly into the running system’s modules. The point of this variable (which you must assign to some random empty location to use non-default) is to see what is being built and installed to a clean and safe location rather than attempting to directly place modules.

The “<something>” just says it is an option you’d substitute with your own text. The case of “<modules_install_path>” just means substitute the path of that temporary location. The following is just one example of this:

cd ~
mkdir module_build
cd module_build
export TEGRA_MODULES_OUT=`pwd`
echo $TEGRA_MODULES_OUT
cd ~
mkdir kernel_build
cd kernel_build
export TEGRA_KERNEL_OUT=`pwd`
echo $TEGRA_KERNEL_OUT
cd /where/ever/you/build/the/kernel/from
# Do all of the build steps, we'll pretend Image and modules have been built.
# Then:
make O=$TEGRA_KERNEL_OUT modules_install INSTALL_MOD_PATH=$TEGRA_MODULES_OUT

Note that you make up a location for most temporary output before you start, and then name it like this (actual location can be anywhere you want which is out of the way and empty of other content):

mkdir /whatever/empty/location/kernel_out
export TEGRA_KERNEL_OUT=/whatever/empty/location/kernel_out
mkdir /whatever/empty/location/module_out
export TEGRA_MODULES_OUT=/whatever/empty/location/module_out

…the only goal is that whatever that location is you’ve created it and it has no other content. These directories will eventually be deleted, or perhaps when you restart a build you will use the same location, but recursively delete content so old files don’t get in the way. Using “$” prefixed to an exported variable is just a convenience. You could simply name the actual path after “O=” or after “INSTALL_MOD_PATH=”, but you’ll make fewer mistakes if you use a variable for substituting and don’t need to retype the path each time.

Tip: After you assign a variable, use “echo $WHATEVER” to see if it is really set. Sometimes a typo implies the variable is empty, and that can result in disaster for some commands.
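One hedged way to automate that check is the shell’s “:?” parameter expansion, which aborts with a message when the variable is unset or empty:

```shell
# Fail fast if the variable is missing, instead of writing into "/".
TEGRA_KERNEL_OUT=$(mktemp -d)        # pretend setup; normally exported earlier
: "${TEGRA_KERNEL_OUT:?TEGRA_KERNEL_OUT is not set}"
echo "build output goes to $TEGRA_KERNEL_OUT"

# Demonstrate the failure mode safely in a subshell:
( : "${SOME_UNSET_VAR:?would have aborted}" ) 2>/dev/null || echo "guard tripped as expected"
```

Putting such a guard at the top of any build script means a typo stops the script immediately rather than producing output in the wrong place.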

So, you mean…

TEGRA_KERNEL_OUT is the folder for building the kernel code (after changing the device tree or pin-mux, etc.),

and

TEGRA_MODULES_OUT is the path for installing the kernel modules, from the kernel source I built, onto the Jetson?
So, I need to set a path to the Jetson board rather than the default module install path in order to install the modules from the code.

Am I right, or am I wrong?
If I’m wrong, maybe it means that the last step 8 is only for checking.

So, if step 8 is only for checking, can you tell me how to reflect the changed kernel on the Jetson?

And is installing the kernel modules the last step of building a custom kernel?
I mean, after I install the kernel modules and reboot my Jetson, will my custom kernel be loaded on the Jetson?
Using the custom kernel is the next problem. I want to understand how to reflect the custom kernel on my Jetson.

Before closing this reply…
I really don’t understand the whole process yet.
I’m sorry, because I know my questions can be a bother for someone who knows this well.
Thanks for the kind, detailed answers. Now I’m on my way to learning.

Thanks for your interest.

Correct, TEGRA_MODULES_OUT is where “make modules_install” puts modules if you have set this optional location. The output location will mirror the directory tree where actual modules go in a running system, but the root of the tree will be in your location instead of the running system. There are all kinds of reasons you don’t want to accidentally overwrite a running system’s modules without looking first and knowing you are installing just the one module and not overwriting all modules.

And yes, TEGRA_KERNEL_OUT is where you’ll have temporary build content to keep your original source tree clean. Every temporary file can have an effect on the build configuration the next time you build, and if you have temporary content in the original tree, then you are in trouble. In the original tree you can run “make mrproper” to clean out most “edited” content, and from then on, use “O=$TEGRA_KERNEL_OUT” so the main source tree remains pristine. If you have doubts, then you can just recursively delete all content in “$TEGRA_KERNEL_OUT” (I typically delete everything except the “.config” there before I start a build).

Not sure which step is “step 8”, but if your change is to a module, then you just copy the module into your running system at the proper subdirectory to:
/lib/modules/$(uname -r)/kernel/
(the directory tree of the $TEGRA_MODULES_OUT will follow the same tree structure and this will hint where the module goes)
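A sketch of that single-module copy (the module name spidev.ko and its relative path are only hypothetical examples; the sandbox directories stand in for the real module trees so the snippet can run anywhere):

```shell
# Stand-ins: TEGRA_MODULES_OUT mirrors what modules_install produced,
# LIBMOD stands in for /lib/modules/$(uname -r) on the running Jetson.
TEGRA_MODULES_OUT=$(mktemp -d)
LIBMOD=$(mktemp -d)
REL=kernel/drivers/spi                     # hypothetical relative path
mkdir -p "$TEGRA_MODULES_OUT/lib/modules/4.9.140-tegra/$REL" "$LIBMOD/$REL"
echo "fake module" > "$TEGRA_MODULES_OUT/lib/modules/4.9.140-tegra/$REL/spidev.ko"

# Copy the one module into the matching subdirectory. On a real Jetson:
#   sudo cp ... /lib/modules/$(uname -r)/$REL/ && sudo depmod -a
cp "$TEGRA_MODULES_OUT/lib/modules/4.9.140-tegra/$REL/spidev.ko" "$LIBMOD/$REL/"
```

Running depmod afterward (on the real system) refreshes the module dependency database so modprobe can find the new module.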

Unless you alter integrated kernel features you won’t touch the actual kernel. Modules are loaded at run time and the Image file has no need of change if a module is added. Integrated features require a copy of the file “Image” to the Jetson’s “/boot” (but there are some risks, and so you want to keep the original Image, and possibly make an alternate boot entry in “/boot/extlinux/extlinux.conf”, but the details change for some systems and I cannot give you a specific answer…you’d look for the docs for that release and Xavier combination).

One thing to consider if you are working on kernels is that you always want to start with the running system’s kernel configuration, and then make edits to that. The file “/proc/config.gz” is a reflection of the running kernel, and once the kernel changes, then you can no longer use this reliably. You should keep a copy of the “/proc/config.gz” somewhere safe on the host PC for future reference. In theory this is the same as “make tegra_defconfig”, but there are at times minor changes, and so copying “/proc/config.gz” is IMHO a superior starting config.
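A small guarded sketch of saving that config (the guard is there because /proc/config.gz only exists when the running kernel was built with the IKCONFIG proc support enabled):

```shell
# Save the running kernel's config for later builds, if it is exposed.
if [ -f /proc/config.gz ]; then
    cp /proc/config.gz "$HOME/config-$(uname -r).gz"
    echo "saved $HOME/config-$(uname -r).gz"
else
    echo "/proc/config.gz not available on this kernel"
fi
```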


Since you are trying to understand some of this, here is the “gist” of what and where kernel files are…

The kernel Image file is the actual kernel, and the bootloader puts this in memory, and then starts executing this. This is process ID 0, pid 0.

The kernel is able to load optional pieces of itself from other files. Those are modules. The features directly in the Image file are “integrated” and can never be removed, while modules can be unloaded. If a feature is invasive to many parts of the kernel, then it won’t be available as a module. If a feature is used for booting prior to access to the filesystem, then the feature should also be integrated instead of being a module (e.g., you can’t use an “ext4” module to read an “ext4” filesystem, since the module file itself is stored on that ext4 filesystem). Some people use an initrd/initial ramdisk to get around this, but that is a complication you don’t want to think about.

Thus, the kernel is one big file, “Image”, and it is in “/boot”. The bootloader loads this. The Image then loads the modules as needed later on.

Whenever a driver loads, regardless of whether it is integrated or in the form of a module, it is passed arguments just like any other program or function call. Quite often some of the arguments come from the device tree. Typical arguments are to enable or disable features, or to tell the driver what the hardware address is (it doesn’t work too well to set up a driver if the driver can’t find the hardware). Mostly the bootloader sets up a minimal amount of environment which the kernel uses during initial run, and any extra data is just ignored. Then, as modules load, modules see what is in that device tree and take data relevant to that driver as an argument.

Traditionally, a device tree would be loaded into memory at the same time as the kernel Image, prior to beginning execution of the Image. Then the tree would simply be there for whatever the kernel wants. Not so traditional is that early bootloader stages can at times want that same information, and thus in less common cases it might be necessary to load parts of the device tree prior to the kernel ever being in memory. You’ll find Jetsons in earlier releases did not require the device tree early on, but in newer releases it is quite common to require device tree content before the kernel ever loads. Bootloaders are in fact kernels, although their only purpose is to self-overwrite with a kernel, and so bootloaders have drivers and drivers need arguments passed to them.

One reason why there is a drive to provide some device tree setup in earlier boot stages is security, e.g., validating boot components are authentic before using them. If not for this I think device tree and bootloaders would be less complicated.

The forums are for asking questions and learning about both Linux and the Jetsons, so ask away if you need information.