It seems that since the last few L4T releases, the device tree is being used by u-boot and other parts of the boot process. Hence, instead of using extlinux.conf to define which device tree to use, the newer releases seem to use a device tree that's flashed into a special partition (mmcblk0p13), which is then loaded at boot and used as the device-tree firmware.
And since L4T 28.x this is now true for both TX1 and TX2.
This is all fine and dandy, except that there is no tool to update the device tree from the TX1/TX2 itself. One needs:
A separate x86 machine
Ubuntu 16.04 installed on it
The JetPack release
before the appropriate "sudo ./flash.sh -k -r DTB" command can be invoked.
This is backwards!! If nVidia is trying to push the TX1/TX2 modules for industrial usage (TX2i), there needs to be a mechanism for them to self-provision without needing an external x86 machine.
We're in the era of ARM machines. nVidia ships a full desktop OS on these TX1/TX2 builds, and yet there is a dependency on an EXTERNAL x86 machine to update the device tree.
And it's not that updating the device tree is something rare that few people need. Just looking through the forum, there are many posts about using SPI, which requires a device-tree change. Jetson/TX1 SPI - eLinux.org
So this is a request (I'm sure many others will agree with me) to nVidia to provide a tool to update device trees locally from the TX1/TX2. This shouldn't be that hard: since the device tree is located on a separate partition, the tool would just need to verify that the device tree is valid and then update the partition.
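As a minimal sketch of the "verify before writing" step: a flattened device tree always starts with the magic value 0xd00dfeed in its first four bytes, so even a trivial check catches writing the wrong file to the partition. The helper name below is illustrative, not an existing NVIDIA tool:

```shell
# Illustrative helper (not an existing NVIDIA tool): reject any file that
# does not begin with the flattened-device-tree magic 0xd00dfeed.
check_dtb() {
  # read the first 4 bytes as hex and strip whitespace
  magic=$(od -A n -t x1 -N 4 "$1" | tr -d ' \n')
  [ "$magic" = "d00dfeed" ]
}

# Usage sketch (partition per the post above; verify for your board):
#   check_dtb new.dtb && sudo dd if=new.dtb of=/dev/mmcblk0p13 conv=fsync
```

A fuller tool would also parse the FDT header's totalsize field and check it fits the partition, but the magic check alone already prevents the most common mistake.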
Since R28.2, the device-tree partition needs an encrypted DTB;
there's an encryption process in the bootloader, and cboot adds some key entries before passing the DTB to u-boot.
You're able to perform a DTB partial update with the following command,
I will investigate the tegraflash.py command. I hope it doesn't require a binary to actually do the signing. I really hope I can run tegraflash.py on a TX1/TX2.
As I suspected, tegraflash.py seems to call tegrarcm internally. And tegrarcm is a binary compiled for x86, not for ARM. So you can't run tegraflash.py on a TX1.
$ file tegrarcm
tegrarcm: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linked, for GNU/Linux 2.6.8, not stripped
So my request stands. It would be really handy to have a mechanism for updating the DTB without needing an external computer.
would you like to re-compile the device tree on the Jetson platform?
you may take a try with the steps below:
how about using commands to convert the DTB file into a text file for editing, then converting it back into a new DTB file.
for example,
to decompile the DTB file into a text file for editing
$ dtc -I dtb -O dts -o temp.dts tegra.dtb
to convert the DTS into a DTB
$ dtc -I dts -O dtb -o output.dtb temp.dts
after that, overwrite the device-tree partition with the "dd" command, then perform a warm reboot to make the DTB modification take effect.
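The overwrite step above could be wrapped as follows. The function name is ours, and the partition path is the one mentioned earlier in the thread (verify it for your board, e.g. under /dev/disk/by-partlabel/, before writing anything):

```shell
# Illustrative wrapper for the dd overwrite step (function name is ours).
# conv=fsync flushes the written data to the device before dd returns,
# so a warm reboot immediately afterwards is safe.
write_dtb() {
  dd if="$1" of="$2" bs=4k conv=fsync 2>/dev/null
}

# Run as root on the Jetson (assumed partition; check your board's layout):
#   write_dtb output.dtb /dev/mmcblk0p13
#   reboot
```

Note that per comment #6 this plain-dd approach only applies where the bootloader accepts an unsigned DTB; on boards that require a signed/encrypted DTB, the image must be signed first.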
thanks
Jetson-TX2 indeed needs an encrypted DTB; it needs a host machine to generate signed files with tegraflash.py.
we have not verified the steps in comment #5, looking forward to your testing result.
thanks
Hopefully, Nvidia will take this as feedback. It would be very, very useful to be able to do this without a host machine. Especially when we have to set up a large number of TX1s, this doesn't scale!
I am trying to free-up the serial console UART so I can use it as a GP UART. Just out of curiosity, I wanted to check if the dtb encryption tool had been ported to run on Nano/ARM, but no problem - I will go about it using a host machine.
I have figured out the format of the signature header and created a python script to sign DTB files and possibly others. I have uploaded the code at [url]https://github.com/kmartin36/nv-tegra-sign/[/url]. It currently generates signed files identical to those made by flash.sh, but without needing any x86 binaries. I cannot guarantee that it will not be broken by a future update, but it works with JetPack 4.2 and the TX2. I have not tried the Nano or TX1.