Sample RootFS Origin

Could someone from NVIDIA provide some background on how the sample RootFS was created? Was it put together manually using chroot and endless makefiles or was Yocto used? I would like to recreate the sample rootFS so I can build on it. For obvious reasons I would like to avoid storing the entire rootFS in version control.

I have tried a few different recipes for building via yocto but there are some basic binaries missing and most of the packages use busybox and/or coreutils. The sample rootFS does not. I plan to try the core-image-full-cmdline image next. I am hopeful that will get me closer but would appreciate some history on the NVIDIA sample rootFS.

I can’t answer the construction details, but it is just Ubuntu 18.04. It is distributed as the sample root filesystem, which is a 100% non-NVIDIA product. Then, prior to running the flash software, the “apply_binaries.sh” script is run with sudo. What that does is add some first-boot setup scripts and the NVIDIA hardware drivers. None of the “optional” packages, e.g., CUDA, are added. This lives in the host PC’s “Linux_for_Tegra/rootfs/” directory, and is almost an exact match to the image which gets flashed.

The part which is not 100% identical is that, when flashing starts, the arguments to the flash software (on the command line, something like “sudo ./flash.sh jetson-nano-devkit mmcblk1p1”) cause edits to some of the “rootfs/boot/” content. For example, the flash software knows a dev kit uses a certain kernel and device tree, and it adjusts the boot content to name them.

I’ll suggest that NVIDIA might be able to provide more information on the sample rootfs, but that probably you have to understand the “apply_binaries.sh” script first since it is the key to any Linux distribution actually working with hardware support (everything else is just ordinary Linux and GNU licensing). The hardware driver “magic” is in “apply_binaries.sh”.

This is far from what you need, but you could do the following: directly unpack just the sample rootfs as root (it is available; just follow the L4T URL for your release, and unpack with sudo), and make a list of files and permissions (that’s a rather large database, but it is simple to do with the “find” command, and perhaps also “xargs” if listing more than the file name; I’ll make suggestions on commands if interested). Then create a log of running “sudo ./apply_binaries.sh”. Finally, make another list of the files and permissions, and diff it against the original list.
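
As a concrete sketch of that list/diff idea (the “rootfs” path and output file names below are just assumptions; adjust for your release layout):

```shell
#!/bin/bash
# Sketch of the before/after comparison described above.
# snapshot_tree records path, permissions, owner, and group for every
# entry under a directory, sorted so that diffs are stable.
snapshot_tree() {
    find "$1" -printf '%p %m %u %g\n' | sort > "$2"
}

# Example workflow (run from the Linux_for_Tegra directory; file names
# are assumptions, not an official procedure):
#   snapshot_tree rootfs before.txt
#   sudo ./apply_binaries.sh 2>&1 | tee apply_log.txt
#   snapshot_tree rootfs after.txt
#   diff before.txt after.txt > rootfs_changes.txt
```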

The reason I suggest this is that some of the content is in the form of Ubuntu packages, and other content is not in a package. One has to separate the scripted content from the package content in some cases. I haven’t looked at Yocto in a really long time, and was never proficient with it, but trying to make a recipe based on Ubuntu sounds painful. The key parts, though (the ones which will make or break the build), center on the NVIDIA content and that content’s environment. Some of that is obvious, but in particular the Xorg X11 server must have the ABI the GPU driver expects in order for the driver to load into it, and the same kernel with the same configuration would be required (or a configuration which adds features). Boot content from outside of the rootfs won’t matter; it’d just be the same content. Everything “Yocto” could work as you expect if that other content is accounted for.
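
One way to tell package content apart from scripted content is `dpkg -S`, which reports the package owning a path and fails for files no package claims. A minimal sketch (run inside a chroot of the rootfs, or point dpkg at it with `--root=`; the example path is a placeholder):

```shell
#!/bin/bash
# Sketch: classify a file as dpkg-package-owned or unpackaged (i.e.,
# likely copied in directly by a script such as apply_binaries.sh).
classify_file() {
    if dpkg -S "$1" >/dev/null 2>&1; then
        echo "package: $1"
    else
        echo "unpackaged: $1"
    fi
}

# Example (placeholder path, inside a chroot of the rootfs):
#   classify_file /usr/lib/xorg/modules/drivers/nvidia_drv.so
```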

One final thought: You’d also want to log one flash on command line to see which kernel and boot content is being picked at the last second. Example:
sudo ./flash.sh jetson-nano-devkit mmcblk1p1 2>&1 | tee log_flash.txt

Keep in mind that my example is for an SD-card model dev kit. An eMMC model would flash mmcblk0p1 instead of mmcblk1p1, and target would differ, e.g., jetson-nano-emmc or jetson-nano-emmc-devkit.

I’ll let NVIDIA comment on actual construction of the “pure” Sample RootFS (it is an interesting question).

Silly me. There is literally a link to the RootFS source on the L4T archive page. It looks like the “pure” sample rootFS can be recreated using a combination of chroot, dpkg-source, dpkg-buildpackage, and the corresponding *.dsc files.

Thanks for the additional information. I have flashed many devices using the method you describe above. It’s all good until you need version-controlled source, modified DTBs, and userland applications in a mass-production environment.

I could probably use this archive to generate an inclusive yocto environment, but it would take a while. It’s not worth it. Will probably just store the sample rootFS as archive and write a script that builds on top of it.
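
That build-on-top script might look something like this (a minimal sketch; the archive and overlay names, and the overlay layout, are assumptions):

```shell
#!/bin/bash
# Sketch: rebuild a working tree from the archived sample rootfs, then
# layer version-controlled project files (DTB configs, apps) on top.
build_rootfs() {
    local archive="$1" overlay="$2" dest="$3"
    mkdir -p "$dest"
    tar -xpf "$archive" -C "$dest"   # unpack the pristine sample rootfs
    cp -a "$overlay/." "$dest/"      # copy the overlay tree on top
}

# Example (hypothetical names), followed by the usual NVIDIA step:
#   build_rootfs sample_rootfs.tbz2 overlay/ Linux_for_Tegra/rootfs
#   cd Linux_for_Tegra && sudo ./apply_binaries.sh
```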

I would probably do similar. FYI though, if you log the apply_binaries.sh step:
sudo ./apply_binaries.sh 2>&1 | tee apply_log.txt
…and then look up both the files added and the packages, you’ll be ahead of the game. The packages themselves have dpkg commands to list their contents. If you know which contents the NVIDIA packages provide, along with the directly copied content, then you’ll have an advantage. Be sure to pay extra attention to anything provided by NVIDIA which is for the Xorg X11 server, and preserve that release so that their driver will work with it.
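
For reference, the standard dpkg commands for that: `dpkg -c` lists the contents of a .deb archive without installing it, and `dpkg -L` lists the files of an already-installed package. A small sketch (the .deb names you pass in would come from your L4T release):

```shell
#!/bin/bash
# Sketch: dump the contents of .deb files without installing them.
# dpkg -c works directly on a .deb archive (it is a dpkg-deb front end).
list_deb_contents() {
    local deb
    for deb in "$@"; do
        echo "== $deb =="
        dpkg -c "$deb" 2>/dev/null || echo "(cannot read $deb)"
    done
}

# For an already-installed package, list its files instead:
#   dpkg -L <package-name>
```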

Note: apply_binaries.sh is itself a human-readable script. Might be useful.
