Load dtbo in Jetpack 5.0.2

I have a few questions regarding using dtbo in Jetpack 5.0.2

  1. What’s the proper way to load a dtbo file? I tried using OVERLAY_DTB_FILE in Linux_for_Tegra/p2822-0000+p2888-0004.conf and reflashing. OVERLAY_DTB_FILE already points to a bunch of dtbo files. Should I remove them and point OVERLAY_DTB_FILE only at my dtbo, or are they critical to the AGX, so that I need to keep them and only append (or prepend?) mine?

  2. After the system comes up, /boot/extlinux/extlinux.conf still shows FDT /boot/dtb/kernel_tegra194-p2888-0001-p2822-0000.dtb. Shouldn’t FDT point to the new dtbo file?

  3. How can I tell whether the dtbo file is actually loaded? dmesg doesn’t seem to mention it.

  4. If I update the dtbo file in the future, do I always need to re-flash, or can extlinux.conf be used? Is there something similar to FDT but for dtbo files?
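Regarding question 3, one quick check is to look for the overlay’s properties in the live device tree under /proc/device-tree. Below is a sketch only; it assumes the overlay sets an overlay-name property at the root node (as the overlay source later in this thread does), so adjust the path to whatever your overlay actually adds:

```shell
#!/bin/sh
# Check whether an applied overlay's "overlay-name" property shows up in the
# live device tree. The property path is an assumption based on overlays that
# set overlay-name at the root node.
check_overlay() {
    dt="$1"
    if [ -f "$dt/overlay-name" ]; then
        # device-tree string properties are NUL-terminated
        tr -d '\0' < "$dt/overlay-name"
        echo
    else
        echo "no overlay-name property found"
    fi
}

check_overlay /proc/device-tree
```

If dtc is installed on the target, `dtc -I fs -O dts /proc/device-tree` also dumps the whole live tree, which can be diffed against a dump taken before the overlay was added.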

hello user100090,

please check the developer guide, To Create and Apply a DTB Overlay File, for the steps to create and apply a device tree overlay.
A device tree overlay is designed to apply updates to the default configuration during system boot-up; you don’t need to run the flash script to burn the partitions.
You should have the dtbo files under /boot/. Use Jetson-IO to select the configuration; it should then create another LABEL that specifies a new FDT entry.
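For illustration, the extra boot entry Jetson-IO generates looks roughly like the following (a sketch only; the menu label and the generated dtb filename are placeholders, not guaranteed names):

```
LABEL JetsonIO
    MENU LABEL Custom Header Config: <your selection>
    LINUX /boot/Image
    FDT /boot/kernel_tegra194-p2888-0001-p2822-0000-user-custom.dtb
    INITRD /boot/initrd
    APPEND ${cbootargs}
```

Selecting that entry at boot makes the bootloader load the generated dtb (with the overlay applied) instead of the stock one, which is why the default FDT line is left untouched.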

Quote from the link you mentioned:

Compile the .dts file to generate a .dtbo file. Move the .dtbo file to flash_folder/kernel/dtb/ before flashing.

#. Add below lines to the <board>.conf file which is used for flashing the device


So flashing is indeed needed? What exactly does the flash do? I thought it would push the dtbo to /boot on the board, but that doesn’t seem to be the case. If I still need to manually copy the dtbo to the board’s /boot, what’s the purpose of flashing?
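For what it’s worth, the board .conf files under Linux_for_Tegra are plain shell sourced by flash.sh, so appending a custom overlay rather than replacing the stock list looks like this (a sketch; the stock list below is a placeholder, keep whatever your board’s .conf actually ships):

```shell
# Placeholder standing in for the stock list your <board>.conf already defines:
OVERLAY_DTB_FILE="L4T-base-overlay.dtbo,L4T-camera-overlay.dtbo"
# Append your own overlay instead of replacing the stock entries:
OVERLAY_DTB_FILE="${OVERLAY_DTB_FILE},my-custom.dtbo"
```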

hello user100090.

you may push the dtbo file under /boot/, it’ll show the same results.

I have never used jetson-io on our custom board.

I just copied my dtbo file to /boot and ran sudo /opt/nvidia/jetson-io/jetson-io.py; the screen flashes, then it drops back to the shell. No errors.

I also tried sudo /opt/nvidia/jetson-io/config-by-hardware.py -l, which gives:

Traceback (most recent call last):
  File "/opt/nvidia/jetson-io/config-by-hardware.py", line 125, in <module>
  File "/opt/nvidia/jetson-io/config-by-hardware.py", line 99, in main
    raise RuntimeError("Platform not supported, no headers found!")
RuntimeError: Platform not supported, no headers found!

When flashing with the OVERLAY_DTB_FILE list, does it automatically merge all the dtbo files into kernel_tegra194-p2888-0001-p2822-0000.dtb? If not, what exactly does flash.sh do with the files listed in OVERLAY_DTB_FILE? And for people who don’t have custom dtbo files: since the default OVERLAY_DTB_FILE already lists a number of dtbo files, will those be merged into the default dtb automatically, or should we empty the list before flashing?
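(For reference: applying a list of overlays on top of a base blob is conceptually what the fdtoverlay tool, shipped with the dtc package, does. This is only an illustration of the merge operation, not a claim about flash.sh internals, and the filenames are placeholders:)

```shell
fdtoverlay -i kernel_tegra194-p2888-0001-p2822-0000.dtb \
           -o merged.dtb \
           overlay1.dtbo overlay2.dtbo my-custom.dtbo
```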

BTW, I thought JP 5.0.2 no longer uses the plugin manager, but if I remove the plugin manager code, e.g. Linux_for_Tegra/source/public/hardware/nvidia/platform/t19x/galen/kernel-dts/common/tegra194-p2888-p2822-pcie-plugin-manager.dtsi, the kernel fails to compile. Am I missing something?

In file included from /home/user/workspace/xavier_35.1.0/Linux_for_Tegra/source/public/kernel/kernel-5.10/arch/arm64/boot/dts/../../../../../../hardware/nvidia/platform/t19x/mccoy/kernel-dts/common/tegra194-p2888-0004-e3900-0000-common.dtsi:27,
                 from /home/user/workspace/xavier_35.1.0/Linux_for_Tegra/source/public/kernel/kernel-5.10/arch/arm64/boot/dts/../../../../../../hardware/nvidia/platform/t19x/mccoy/kernel-dts/tegra194-p2888-0004-e3900-0000.dts:16:
/home/user/workspace/xavier_35.1.0/Linux_for_Tegra/source/public/kernel/kernel-5.10/../../hardware/nvidia/platform/t19x/galen/kernel-dts/common/tegra194-plugin-manager-p2888-0000.dtsi:17:10: fatal error: tegra194-p2888-p2822-pcie-plugin-manager.dtsi: No such file or directory
   17 | #include "tegra194-p2888-p2822-pcie-plugin-manager.dtsi"
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [/home/user/workspace/xavier_35.1.0/Linux_for_Tegra/source/public/kernel/kernel-5.10/arch/arm64/boot/dts/Makefile:80: arch/arm64/boot/dts/_ddot_/_ddot_/_ddot_/_ddot_/_ddot_/_ddot_/hardware/nvidia/platform/t19x/mccoy/kernel-dts/tegra194-p2888-0004-e3900-0000.dtb] Error 1
make[2]: *** Waiting for unfinished jobs....
  CHK     include/generated/compile.h
make[1]: *** [/home/user/workspace/xavier_35.1.0/Linux_for_Tegra/source/public/kernel/kernel-5.10/Makefile:1391: dtbs] Error 2
make[1]: *** Waiting for unfinished jobs....
make[1]: Leaving directory '/home/user/workspace/xavier_35.1.0/kernel-build'
make: *** [Makefile:213: __sub-make] Error 2

In Jetpack 4.x we use ODMDATA in p2972-0000.conf.common, along with the plugin manager. Since Jetpack 5 no longer supports the plugin manager, I assume ODMDATA is also unused? Should I comment it out in p2972-0000.conf.common before flashing, or is it still used by something?

hi user100090,

Jetson-io.py only works with DevKits.

hello user100090,

please share the actual use case for modifying ODMDATA.

We do something special on our custom board. In JP 4 we have something like the following in tegra194-p2888-p2822-pcie-plugin-manager.dtsi :

               fragment-pcie-xbar-2-1-1-1-2 {
                       odm-data = "pcie-xbar-2-1-1-1-2";
                       override@0 {
                               /* ... */
                       };
               };

and set ODMDATA=0x1190000 in p2972-0000.conf.common.

In JP 5, we switched to using an overlay:


/ {
        overlay-name = "custom overlay";
        compatible = "nvidia,tegra194";
        nvidia,dtsfilename = __FILE__;
        nvidia,dtbbuildtime = __DATE__, __TIME__;

        fragment@0 {
                target-path = "/";
                board_config {
                        odm-data = "PCIE_XBAR_2_1_1_1_2";
                };
                __overlay__ {
                        /* ... */
                };
        };
};
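(For reference, an overlay source like the one above is compiled to a .dtbo with dtc; the filenames here are placeholders:)

```shell
dtc -O dtb -o my-custom.dtbo my-custom.dts
```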

Is ODMDATA modification still needed in JP5?

Good to know.
Can you answer my other questions?


Yes, ODMDATA is still needed. Also, please work with your direct contact at NVIDIA instead of raising this issue (changing ODMDATA) on the forum.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.