Custom rootfs for TK1 based on L4T r21.5 example rootfs

I discovered that the rootfs installed by JetPack v2.3.1 for the TK1 [url]https://developer.nvidia.com/embedded/jetpack[/url] and the standalone L4T r21.5 example rootfs [url]https://developer.nvidia.com/linux-tegra-r215[/url] are not the same. There are some related questions for the TX1, like this one [url]https://devtalk.nvidia.com/default/topic/937671/?comment=4886298[/url], which points at a build procedure at elinux.org, but I did not find any info for the TK1. How do I get to a rootfs like the one in a JetPack build, starting from the standalone L4T example rootfs?

So far as I know the L4T sample rootfs for separate download is the only rootfs you can get without actually running JetPack. I couldn’t tell you what the differences are, but they are likely very minor.

Including the JetPack installation in our build process is problematic and adds overhead, because all we need is a reproducible way to get the JetPack example rootfs. Is there some patch which could be applied to the standalone rootfs to turn it into the JetPack rootfs? Otherwise one could request such a patch from NVIDIA.

I do not know of any such patch. For a production environment I’d tend to suggest building a completely ready system, including any updates, and then cloning it. A command line flash can then be used to put the clone on the board instead of the sample rootfs. That would mean using JetPack only once.

See:
[url]http://elinux.org/Jetson/Cloning[/url]

Note that a loopback mounted clone image can be updated with rsync, or customized just before its use (e.g., a file with a serial number could be incremented). Flashing with a clone is also faster since it does not build the system.img.raw or system.img.
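
For example, a minimal sketch of reusing a clone with flash.sh (the clone file name, its location, and the host-side overlay directory are made up here; adjust the board name and device if yours differ):

<b>sudo -s</b>
cd Linux_for_Tegra
# optional: freshen a few files inside the clone before flashing
mount -o loop /somewhere/my_clone.img /mnt
rsync -av /somewhere/host_overlay/ /mnt/
umount /mnt
# put the clone where flash.sh expects the system image, then flash
# with "-r" so the existing image is reused instead of being rebuilt
cp /somewhere/my_clone.img bootloader/system.img
./flash.sh -r jetson-tk1 mmcblk0p1
exit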

Ok, I tried to avoid that dependency because we are using a SOM (system on module) provider’s toolchain, full image, flasher, etc., and if we want to use the JetPack cloning mechanism we would have to merge the board vendor’s drivers, etc. into the JetPack environment somehow, right?

I don’t know for sure (I’ve never used your environment), but more details might help (I think you can use cloning even in your situation, but some details may change).

In particular, the separate sample rootfs is somehow getting to your custom system…is this ever flashed onto the system using the flash.sh script from L4T? If not, what changes are made to the starting rootfs, and what tool is used to put it onto your board? Are there any other partitions which differ between your board and what the L4T flash.sh would use?

Tools exist to clone or restore any partition at all, one at a time, or even all partitions in a single binary image (any custom layout could be cloned and restored in its entirety, it just wouldn’t be loopback mountable). One thing which can disrupt use of clones is a partition size change which alters the disk layout (eMMC in this case)…e.g., you can’t plant a 2MB partition in the space of a partition which was originally 1MB. Also, if a partition holds a binary image which another system component requires, and more than one version of that image is available, the two have to be compatible versions. So what details can you give on how partitions were added (custom tools, whether you have a binary image of the partition, and so on), and how do those partitions differ from a standard L4T flash.sh install?
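
As an aside, a single partition can also be cloned generically from a running board with nothing more than dd, and then examined on the host (device names and paths here are just an example; adjust them to your layout):

# on the board: copy the raw rootfs partition to a file on external storage
sudo dd if=/dev/mmcblk0p1 of=/media/usbdrive/rootfs-clone.img bs=1M
# on the host: loopback mount the resulting ext4 image to inspect or edit it
sudo mount -o loop rootfs-clone.img /mnt
ls /mnt/boot
sudo umount /mnt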

No, I use a script from the board provider to flash the rootfs onto eMMC (see hyperlink below).

The rootfs is the one included in JetPack, without modifications (meaning it has the kernel and dtb in /rootfs/boot), and I followed the board provider’s instructions to put it onto the board: http://developer.toradex.com/knowledge-base/installing-nvidia-jetpack-with-l4t-on-apalis-tk1.

Yes, I use a partition layout with one boot partition (MBR, U-Boot), a rootfs A partition (kernel, dtb), a rootfs B partition (kernel, dtb) and a data partition.

(Another point which could be of importance: I extract the rootfs to its full size on the board.)

I have an Apalis T30 sitting next to me at the moment…I don’t have the TK1 variant though, so I’m making some guesses. I also use a Fedora host, so I’m making some further guesses there. The following is a bit complicated, but I don’t know of an easier way to get the JetPack version of the rootfs onto the Toradex setup, since the rootfs from a JetPack install is not available as a tar archive. There is a “shorter summary” at the end of all of this; you could read that first.

Some possibly useful preliminary information: The L4T sample rootfs itself does not contain any proprietary NVIDIA drivers…it’s purely Ubuntu. Customization for NVIDIA essentially comes in two steps: first, the apply_binaries.sh script unpacks a number of files, and second, the flash process adds some boot configuration to the “/boot” directory (including kernel and dtb copies, with a copy of extlinux.conf naming which dtb is used). The files which are unpacked come from the “.tbz2” archives in the “bootloader” and “kernel” subdirectories. The Toradex procedure follows the same general theme, in that they take a rootfs and overlay some files onto it to adapt it to this particular board; the source of the overlaid files is mainly what differs. If there are multiple file unpacks (overlays on top of overlays), then the overlay which occurs last wins.
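
For reference, the standard host-side L4T steps look roughly like this (the sample rootfs archive name is from memory and may differ slightly for your download):

<b>sudo -s</b>
cd Linux_for_Tegra
# unpack the sample rootfs into rootfs/
tar xpf ../Tegra_Linux_Sample-Root-Filesystem_R21.5.0_armhf.tbz2 -C rootfs
# step one: unpack the NVIDIA .tbz2 archives onto rootfs/
./apply_binaries.sh
# step two: flashing adds kernel, dtb and extlinux.conf under /boot while building system.img
./flash.sh jetson-tk1 mmcblk0p1
exit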

During a regular Jetson flash a loopback mountable image is created which is the melding of several file unpacks: “bootloader/system.img.raw”. This serves as a source of the rootfs you are looking for, but has had files unpacked onto it which are not in the Toradex version (apparently just the kernel modules directory needs manual intervention, but I have no way to test). The Toradex procedure is similar in purpose: it also unpacks a rootfs and then overlays content onto it.
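
If you want to see exactly what ended up in that image, it can be loopback mounted read-only on the host and inspected (substitute the real path to your system.img.raw):

sudo mount -o loop,ro /wherever/it/is/system.img.raw /mnt
ls /mnt/boot          # kernel, dtb and extlinux.conf added during flash
ls /mnt/lib/modules   # kernel modules added by apply_binaries.sh
sudo umount /mnt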

Note that the source of the rootfs in your case would normally be an image from Toradex. For the L4T install, Toradex has you unpack this and archive a subset of the Toradex file system, creating a kernel module archive (mod.tar.bz2). The combination of this Toradex update/flash substitutes for the apply_binaries.sh and flash.sh steps. I believe that if you were to use a loopback mounted system.img.raw as the rootfs, which has all of the Jetson-specific files on it, and then overlay it with the same mod.tar.bz2, then the extra Jetson versions of those files would not matter…the Toradex update and flash step should overlay the remaining boot files onto the image, and only those would be used. There may be some extra files left over, but using a JetPack system.img.raw instead of an unpacked sample rootfs should get exactly what you are looking for.

The shorter summary: If you get a rootfs partition from a JetPack install…either by cloning its rootfs or by using the system.img.raw, and substitute this under loopback mount for the unpacked L4T sample rootfs, then you should get the results you are looking for. This would of course require either first running the JetPack flash to a regular Jetson to get the system.img.raw, or a clone from a Jetson that has what you want on it (meaning it was flashed with the more recent JetPack). Once you have that image you can use the Toradex instructions.

An example if you have system.img.raw would be:

<b>sudo -s</b>
tar xjvf Apalis_TK1_LinuxImageV2.6.1Beta2_20161122.tar.bz2
cd Apalis_TK1_LinuxImageV2.6.1/rootfs
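# archive the Toradex kernel modules so they can be overlaid later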
tar cjvf ../mod.tar.bz2 lib/modules
cd ..
rm -rf rootfs
mkdir rootfs
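# loopback mount the JetPack-built image in place of the Toradex rootfs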
mount -o loop /wherever/it/is/system.img.raw rootfs
cd rootfs
tar xjvf ../mod.tar.bz2
cd ..
# <b>...do the rest of the Toradex flash steps...</b>
umount rootfs
exit

Most of the steps can be skipped after that because you have an image and a flash setup with the proper edits already in place…you’d just need to loopback mount the system.img.raw in place of rootfs.
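
In other words, subsequent flashes could look something like this sketch (assuming the same directory layout as the example above):

<b>sudo -s</b>
cd Apalis_TK1_LinuxImageV2.6.1
mount -o loop /wherever/it/is/system.img.raw rootfs
# <b>...run the usual Toradex update/flash steps...</b>
umount rootfs
exit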

Thanks for the detailed explanation.