sample rootfs

Hi,

I just got the TX1 dev kit a few days ago and am doing testing etc. right now.

l4t_quick_start_guide.txt says:

2. Untar the files and assemble the rootfs:

   sudo tar xpf Tegra210_Linux_R23.1.1_armhf.tbz2
   cd Linux_for_Tegra/rootfs/
   sudo tar xpf ../../Tegra_Linux_Sample-Root-Filesystem_R23.1.1_armhf.tbz2
   cd ../
   sudo ./apply_binaries.sh

My question is:

Is there anything inside Tegra_Linux_Sample-Root-Filesystem_R23.1.1_armhf that the binaries/files/drivers provided by apply_binaries.sh depend on?

Put more simply, can I apply the script to an Ubuntu Core image without any modification?

The sample root file system is a pure Ubuntu (GPL-licensed) set of files. This includes things like the default video driver, which is the non-hardware-accelerated one. The apply_binaries.sh script unpacks into this, and in some cases adds new functionality (e.g., /etc/nv_tegra_release) or substitutes functionality (non-GPL files). Without apply_binaries.sh, the sample rootfs works for “standard” tasks, but, for example, it could not work with CUDA.

I’ve used apply_binaries.sh with the option that points it at an SD card on my host (the “-r” or “--root” option), and I’ve also directly unpacked these archives into the “/” directory of a running Jetson.
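
For example (the mount point below is hypothetical; point it at wherever the SD card’s rootfs partition is actually mounted):

# hypothetical mount point for the SD card's rootfs partition
sudo ./apply_binaries.sh -r /media/ubuntu/sdcard-rootfs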

I’m not sure whether by “Ubuntu core image” you mean some other root file system instead of the sample rootfs. If so, it probably wouldn’t hurt anything to try it; whether that other rootfs is configured to use the files from apply_binaries.sh, I don’t know. More information is needed on what “Ubuntu core image” means.

About ubuntu core:

https://wiki.ubuntu.com/Core

... Ubuntu Core delivers a functional user-space environment, with full support for installation of additional software from the Ubuntu repositories, through the use of the apt-get command. ...

I am using the attached scripts to build the root filesystem. The question is: after applying the apply_binaries.sh script, is something missing in my rootfs that would prevent me from fully using the TX1?

I am thinking of drivers right now. I don’t know which drivers from the Ubuntu repos the TX1 uses.

The attachment did not attach, but the sample rootfs sounds like core; the apply_binaries.sh step adds things which provide a hardware-accelerated version. If you build your own rootfs, I don’t know whether what you build will use the hardware-accelerated versions, but chances are at least parts of it will (especially if it follows the Ubuntu 14.04 file system scheme).

I’d suggest just trying it; it won’t hurt, and you can always clone prior to trying. I suspect the job of building the basic system is more difficult than it looks due to the 64-bit kernel and 32-bit user space (you would be best off starting with the NVIDIA kernel, modifying in place as needed under the L4T sample rootfs, and only then adapting it to a new rootfs).

If you want to see what apply_binaries.sh puts in, just look in the driver package’s nv_tegra directory and view the contents of the tar files (.tbz2).
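
For example, from the top of the extracted driver package (archive names can vary a bit between releases):

cd Linux_for_Tegra/nv_tegra
# list the contents of each archive without extracting anything
for f in *.tbz2; do echo "== $f =="; tar tjf "$f"; done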

Building an Ubuntu rootfs isn’t that complicated if you know how to use qemu.

Let’s start with a 16.04 arm64 image.

First you’ll need to install qemu-user-static on your Ubuntu Host PC.

sudo apt-get install qemu-user-static

Download the rootfs image.

wget http://cdimage.ubuntu.com/ubuntu-core/daily/current/xenial-core-arm64.tar.gz

Note: xenial is currently beta, so don’t use it for production systems (as if you’re using your TX1 for anything besides development at this point).

Extract the archive as root because all the files need to be owned by root.

mkdir rootfs
cd rootfs
sudo tar -zxvf ../xenial-core-arm64.tar.gz

Copy the aarch64 qemu executable to the rootfs. You’ll need this for chroot.

sudo cp /usr/bin/qemu-aarch64-static usr/bin

You won’t be able to use DNS inside the chroot without copying in a resolv.conf file. The host’s resolv.conf will do.

sudo cp /etc/resolv.conf etc/resolv.conf

Now chroot. Don’t worry, your system knows to run qemu now.

sudo chroot .
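
(The chroot works because the host’s binfmt_misc support hands aarch64 executables to the qemu-aarch64-static binary we copied in. If you instead get an “Exec format error”, a quick sanity check on a typical Ubuntu host is:)

cat /proc/sys/fs/binfmt_misc/qemu-aarch64
# should show an enabled entry whose interpreter is /usr/bin/qemu-aarch64-static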

Now run apt-get as you normally would.

apt-get update
apt-get install ubuntu-desktop
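
Once you’re done installing packages, back out of the chroot and remove the temporary helpers. A minimal sketch, assuming you are back in the rootfs directory (you may prefer to restore the image’s own resolv.conf rather than just deleting the borrowed host copy):

exit                                  # leave the chroot (run inside the chroot)
sudo rm usr/bin/qemu-aarch64-static   # only needed while emulating inside the chroot
sudo rm etc/resolv.conf               # this was the borrowed host copy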

I’m going to try to boot my new rootfs as soon as I get my board (which is currently on backorder), so I’ll let you know how it works out.

Next I’ll try a 4.5 kernel.

Yes, that’s how you would build a new filesystem.

The question is whether something is missing before/after applying the apply_binaries.sh script to that new filesystem.

I did it with the 14.04.4 core image and it works, but so far I have only had time to test simple connectivity via Wi-Fi/SSH. I will try to make the camera work next.
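
(Once apply_binaries.sh has been run, one quick camera smoke test is the nvgstcapture-1.0 tool it installs; the exact options vary by L4T release, so check its help output first:)

nvgstcapture-1.0 --help
nvgstcapture-1.0    # with no arguments it should start a preview from the default onboard camera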

Dear all:

Has this problem been fixed?
I’m using the R24.2.1 release package, and I’m thinking about how to customize the Tegra package.

As Linuxdev said, the apply_binaries.sh script applies the NVIDIA components to the original rootfs, creating some symbolic links and copying some binaries into it.

These are my logs from running the apply_binaries.sh script.

Using rootfs directory of: /home/zuoqiang/WorkSpace/CV2_TX1_flash/Linux_for_Tegra/rootfs
Extracting the NVIDIA user space components to /home/zuoqiang/WorkSpace/CV2_TX1_flash/Linux_for_Tegra/rootfs
Extracting the BSP test tools to /home/zuoqiang/WorkSpace/CV2_TX1_flash/Linux_for_Tegra/rootfs
Extracting the NVIDIA gst test applications to /home/zuoqiang/WorkSpace/CV2_TX1_flash/Linux_for_Tegra/rootfs
Extracting the chromium browser to /home/zuoqiang/WorkSpace/CV2_TX1_flash/Linux_for_Tegra/rootfs
Extracting the configuration files for the supplied root filesystem to /home/zuoqiang/WorkSpace/CV2_TX1_flash/Linux_for_Tegra/rootfs
Creating a symbolic link nvgstplayer pointing to nvgstplayer-1.0
Creating a symbolic link nvgstcapture pointing to nvgstcapture-1.0
Adding symlink libcuda.so → libcuda.so.1.1 in target rootfs
Adding symlink libGL.so → libGL.so.1 in target rootfs
Adding symlink libnvbuf_utils.so → libnvbuf_utils.so.1.0.0 in target rootfs
Adding symlink libcuda.so → tegra/libcuda.so in target rootfs
Adding symlink libEGL.so → libEGL.so.1 in target rootfs
Adding symlink /home/zuoqiang/WorkSpace/CV2_TX1_flash/Linux_for_Tegra/rootfs/usr/lib/aarch64-linux-gnu/libdrm_nvdc.so → /home/zuoqiang/WorkSpace/CV2_TX1_flash/Linux_for_Tegra/rootfs/usr/lib/aarch64-linux-gnu/tegra/libdrm.so.2
Adding symlink nvidia_icd.json → /etc/vulkan/icd.d/nvidia_icd.json in target rootfs
Adding symlinks for systemd nv.service and nvfb.service
Disable the ondemand service by changing the runlevels to ‘K’
Extracting the firmwares and kernel modules to /home/zuoqiang/WorkSpace/CV2_TX1_flash/Linux_for_Tegra/rootfs
Extracting the kernel headers to /usr/src in target rootfs
Adding target symlink /lib/modules/3.10.96-tegra/build → /usr/src/linux-headers-3.10.96-tegra
Installing zImage into /boot in target rootfs
Installing Image into /boot in target rootfs
Installing the board *.dtb files into /boot in target rootfs
Success!

I’m not sure of the exact question, but apply_binaries.sh is a human-readable shell script, so you can see exactly what happens. Mostly it just unpacks bzip2-compressed tar archives into the root file system. If you look in the driver package’s “nv_tegra/” subdirectory you will find three tar archives. The archive which corresponds to drivers is “nvidia_drivers.tbz2”; this file can be unpacked directly in the root of a running Jetson if repair is needed, since this is the file providing the content checked via this command:

sha1sum -c /etc/nv_tegra_release

If you are doing something risky and know the drivers might be broken, then you could just copy nvidia_drivers.tbz2 directly onto your Jetson ahead of time…and then unpack it from “/” whenever something has broken the system, and those files will be restored.
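
A minimal sketch of that repair, run on the Jetson itself (the location of the saved archive below is just an example; the important part is unpacking from “/” with permissions preserved):

cd /
sudo tar xpjf /home/ubuntu/nvidia_drivers.tbz2   # path to your saved copy of the archive
sudo ldconfig
sha1sum -c /etc/nv_tegra_release                 # verify the restored files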

There are other files in the other .tbz2 archives, but the one mentioned above is the one most at risk from bad updates. This is also the one which gives hardware-accelerated access to the GPU and graphics.