How to git version rootfs

Hi,

I am trying to cross-install some libraries into the rootfs after getting the Nano SDK. The default rootfs owner and group are both root, so I have a problem running git add on those directories. If I change both ownerships to the current user, then after flashing the system seems to get stuck at the oem-config stage.

How do I track the rootfs directory with git in this case?

You can’t change the rootfs permissions in most cases and still have a working system. If you provide more details on what you’re actually trying to accomplish, then a better solution can likely be found.

Hi,

I am cross-installing some packages into the rootfs directory on the host. I’d like to track them with git.

Thanks,

Tracking the files of specific packages would be far easier than tracking the entire rootfs. If you were to do so, then what I’d suggest is to either clone the rootfs and work on a loopback-mounted clone, or else use rsync to put a copy on the host PC (being sure to preserve numeric IDs).

From that point rsync could at any time tell you if the Jetson files differ from the host’s files, and optionally update them. Git would then work on the host side and not the Jetson side, but you’d still need to create and run your git as root/sudo.

To some extent it depends on what kind of tracking you want…just to know of a change, or to actually diff a non-binary file. Is there a reason you need to put the entire operating system into git? Can you instead place specific files into git?
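
For a rough idea of the rsync approach, something like the following would seed a reference copy and then report changes (the host path and Jetson address are only placeholders, and this assumes ssh login as root on the Jetson is enabled):

    # One-time seed: copy the Jetson's filesystem to the host, preserving numeric IDs.
    sudo rsync -ax --numeric-ids root@JETSON_IP:/ /home/someone/rootfs_reference/

    # Any later time: list what has changed on the Jetson without copying anything.
    sudo rsync -nax --delete --numeric-ids root@JETSON_IP:/ /home/someone/rootfs_reference/

The "-n" ("--dry-run") in the second command means nothing is transferred; you just get a report of what differs.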

Thanks.

Currently I am preparing a final image on the host side only. In JetPack_4.2.2_Linux_GA_P3448/Linux_for_Tegra/rootfs I am installing different packages with qemu. The whole Linux_for_Tegra directory needs to be managed by git to track any change.

The problem with git as root is that when I push to the remote, I cannot do a sudo push.

For any change within rootfs I only want to know that there is a change, not the diff. Another example: if I develop a service, I also need to cp it into rootfs before generating the image. That’s why I am thinking of putting the rootfs directory under git.

Thanks

IMO git isn’t the tool and probably can’t be made to work here. Instead, track your modifications and apply those to the current, stock rootfs as linuxdev suggested.

Hi Mdegans,

What linuxdev mentioned seems to be about syncing between the Nano and the host PC. I was talking about how to set up the host development environment correctly, so we can make sure everyone fetches the same code and the build generates the same image on the host.

I am just wondering what your setup is. Do you always develop on the Jetson Nano itself? How do you collaborate with other team members if you don’t host the whole SDK under version control?

We need to host the Jetson Nano SDK somehow and set up cross-installing and cross-compiling on the host, so that a build server can generate a final image.

Thanks

I almost never develop on the Nano itself. I use JetBrains products personally to develop and debug remotely. Both PyCharm (Pro) and CLion work flawlessly with the Nano. As to how to ensure the SDK is the same on all machines, that’s something SDK Manager is mostly supposed to ensure by auto-updating.

My suggestion to your rootfs problem would be to host a rootfs tarball on a network share, or download a copy from NVIDIA directly, each time you need to make a new image. Then you:

  • extract rootfs from network share
  • apply differences (which you track using version control)
  • build image

You can even have a script do that. There shouldn’t be a need to track anything other than the differences. Putting the entire contents of the rootfs in version control, and updating that every time a new rootfs tarball is released, will lead to an overly large git repository. Remember that version control will keep every version of every file. So if a rootfs is 8 GB and you update it, your repository could be as large as 16 GB, and so forth.
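
A minimal sketch of such a script, assuming the tarball sits on a read-only share and your tracked differences live in an overlay directory (all paths, and the tarball name, are placeholders; the board name shown is the Nano devkit):

    #!/bin/bash
    set -e
    L4T=~/nvidia/Linux_for_Tegra

    # 1. Extract a pristine rootfs from the read-only share.
    sudo rm -rf "$L4T/rootfs"
    sudo mkdir -p "$L4T/rootfs"
    sudo tar xpf /mnt/share/sample_rootfs.tbz2 -C "$L4T/rootfs"
    sudo "$L4T/apply_binaries.sh"

    # 2. Apply the version-controlled differences (your git-tracked overlay).
    sudo rsync -a --numeric-ids ~/my-overlay/ "$L4T/rootfs/"

    # 3. Build the image without flashing.
    cd "$L4T" && sudo ./flash.sh --no-flash jetson-nano-qspi-sd mmcblk0p1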

Hi Mdegans,

extract rootfs from network share
apply differences (which you track using version control)
build image

The problem for me is step 2. I cannot apply these differences, because I have to use qemu + chroot to cross-install the packages into the rootfs directory on the host. By the time I exit the chroot, it has already installed libs and dependencies everywhere in the rootfs. That’s the reason I have to put the whole rootfs under git control.

As to how to ensure the SDK is the same on all machines, that’s something SDK Manager is mostly supposed to ensure by auto-updating.

Honestly, we are producing a few hundred Nano devices. I have to prepare the final image and burn it onto SD cards. Connecting each of them to SDK Manager would be very slow, and would introduce differences if SDK Manager does the updating.

Thanks.

Note that any rootfs needs absolute preservation of numeric owner/group IDs, and git is not capable of this. It is the realm of “rsync” to detect filesystem changes (including user/group/permissions), download or update, and preserve permissions. “rsync” works locally or remotely. “rsync” can simply list what it finds has changed. This is really what you want.

Your reference copy can exist on your host PC or a server so long as you can guarantee no accidental edits…any edit would probably be from a set of files you’ve developed and then copied into the rootfs reference copy. Cloning or otherwise starting with a current Nano rootfs could be considered a “seed” to start with, but after that you would use it more or less like a version control system for a filesystem.

There are many ways to do what you want via simple scripting of rsync. Flashing on command line via the “flash.sh” tool offers ways to handle this since flash.sh is itself a script which can be modified. Normally JetPack/SDKM would have unpacked the sample rootfs and run apply_binaries.sh to create the “Linux_for_Tegra/rootfs/” content, followed by a few minor edits to the “/boot” content during actual flash. Once that has been created each run of flash.sh ignores further unpacking of the sample rootfs, along with ignoring repeated apply_binaries.sh updates. There is no reason you couldn’t modify flash.sh to update the “Linux_for_Tegra/rootfs/” via some reference copy over rsync.
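
As one possibility (paths here are hypothetical), a small wrapper script could refresh “rootfs/” from the reference copy before each flash, rather than editing flash.sh itself:

    #!/bin/bash
    # Make Linux_for_Tegra/rootfs/ exactly match the reference copy, then flash.
    set -e
    sudo rsync -a --numeric-ids --delete /srv/rootfs_reference/ ~/nvidia/Linux_for_Tegra/rootfs/
    cd ~/nvidia/Linux_for_Tegra && sudo ./flash.sh jetson-nano-qspi-sd mmcblk0p1

The "--delete" is what makes “rootfs/” a true mirror of the reference rather than just a superset.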

You really should experiment some with what rsync can do. Some random comments without any particular order are:

  • The "--dry-run" option shows you what would happen without actually committing to anything.
  • The "--numeric-ids" preserves numeric user and group IDs, which is critical when mixing things up.
  • The single-character options can be combined, e.g., "-x" (don't cross filesystems) and "-z" (compress) can become "-xz".
  • You can get an idea of what is going on (or redirect to a log using "tee") through the "--info=progress2,stats2" option...extra useful in combination with "--dry-run".
  • Certain directories should always be excluded, such as the ".gvfs" and "lost+found" directories. Example: "--exclude '.gvfs' --exclude 'lost+found'".
  • Use the "-z" compression if doing a remote rsync, but skip "-z" for local "same machine" operations.
  • An example backup of a home directory to a contrived "/mnt/backup/" location:
    rsync -avcrltxAP --info=progress2,stats2 --delete-before --numeric-ids --exclude '.gvfs' --exclude 'lost+found' /home/ /mnt/backup
    
  • An example similar operation remotely over ssh, designed for nearly full systems in need of having files deleted before updates are added, performed on a Jetson and placed onto a host with "root" unlocked (non-update files are never touched):
    sudo rsync -avczrx --numeric-ids --exclude '.gvfs' --exclude 'lost+found' -e 'ssh' --delete-before / root@YOUR_HOST_IP_ADDRESS:/home/someone/rsync_stuff/
    
  • The same thing in multiline format with backslash trailing lines (hint: This makes it easy to remove or add "--dry-run"):
    sudo rsync -avczrx --numeric-ids --exclude '.gvfs' --exclude 'lost+found' \
     -e 'ssh' --delete-before \
    / root@YOUR_HOST_IP_ADDRESS:/home/someone/rsync_stuff/
    
  • Reversing source and destination, while keeping the same "other" options, usually gives you the reverse operation.
  • The "--numeric-ids" implies you must run as root. Only root has the authority to deal with permissions which the user themself is not privy to.
  • For development systems I unlock root, set up ssh keys so I have non-password root login to the Jetson (but no access from Jetson to host PC), and then relock the root password. This results in ssh to "root@the_jetson" working without sudo, but root still cannot be logged into directly as root (only sudo style root operations and ssh to root works). Setting up keys is in fact a very useful way of setting up developers in a commercial environment. The only part here which is unusual is that of unlocking root on Ubuntu long enough to grant access by key. There is some root unlock info here: https://devtalk.nvidia.com/default/topic/1062269/jetson-tx2/cuda-remote-debug-error-amp-problem-creating-executable/post/5379922/#5379922

You should experiment with seeing how to extract, slightly modify and detect, and then restore updates through some small subdirectory which is fast to test on. For example, you could rsync backup the sample applications, and then modify something on the host PC, followed by detecting the change through rsync, or restoring the Jetson to contain the change via rsync. Perhaps there are more file types you wish to ignore, e.g., maybe the “.o” object files of a C compile, and you could experiment with that.
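
As a concrete version of that experiment (the CUDA samples directory is just a convenient small tree to practice on; any directory works):

    # Back up the samples from the Jetson to the host.
    sudo rsync -a --numeric-ids root@JETSON_IP:/usr/local/cuda/samples/ /tmp/samples_backup/

    # ...edit something on the Jetson, then detect the change without copying:
    sudo rsync -na --info=progress2,stats2 --numeric-ids \
     root@JETSON_IP:/usr/local/cuda/samples/ /tmp/samples_backup/

    # Restore the Jetson from the backup, skipping ".o" object files:
    sudo rsync -a --numeric-ids --exclude '*.o' \
     /tmp/samples_backup/ root@JETSON_IP:/usr/local/cuda/samples/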

That’s a good way of installing packages if you can figure it out.

Can’t you automate the chroot and qemu step? Then you version control your dependency installation script. In other words, treat the rootfs folder as ephemeral. Each time you need to build a new image, you download a rootfs tarball, either from NVIDIA or (better) from a read-only shared folder on your server. Then you install your dependencies and software using your script (which you version control), build the image, and finally delete the rootfs folder (to avoid any confusion or accidental reuse). There should be no need to version control the rootfs folder. You need only keep track of which version you built against.
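
A rough sketch of what that script might look like, assuming the host has the qemu-user-static package installed (the package at the end is just an example; the list of packages is the part you would keep in version control):

    #!/bin/bash
    # Cross-install packages into the L4T rootfs via qemu + chroot. Run as root.
    set -e
    ROOTFS=~/nvidia/Linux_for_Tegra/rootfs

    # Let ARM binaries inside the chroot run on the x86 host.
    cp /usr/bin/qemu-aarch64-static "$ROOTFS/usr/bin/"

    # Give apt what it needs inside the chroot (back up resolv.conf first if
    # you care about the original, since this overwrites it).
    mount --bind /proc "$ROOTFS/proc"
    mount --bind /dev "$ROOTFS/dev"
    cp /etc/resolv.conf "$ROOTFS/etc/resolv.conf"

    # Install the version-controlled dependency list.
    chroot "$ROOTFS" apt-get update
    chroot "$ROOTFS" apt-get install -y libgstreamer1.0-dev   # example package

    # Clean up so the image doesn't ship host artifacts.
    umount "$ROOTFS/proc" "$ROOTFS/dev"
    rm "$ROOTFS/usr/bin/qemu-aarch64-static"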

Forgive me if I wasn’t clear. I wasn’t referring to updating the Nano directly using SDKM. However, SDKM does update the “Linux For Tegra” folder and the other development dependencies within it, so you can use it to make sure each development machine is up to date. If you need more assurance you can always copy the kernel, etc., the same way as the rootfs. That’s probably a good idea anyway, as the kernel and rootfs versions must be in sync (at least the modules).

My question at this point is: it almost sounds like you intend to have unique images for every Nano. If that’s the case, is it really necessary? Ideally you make one image and flash it many times. If you need to update it after that, I might suggest setting up and using an online apt repository. Debian packaging can be a pain, but once you figure it out, all you need to do is set up unattended upgrades on your clients (this can be done on the rootfs), and when you push an update to your apt server the changes will start rolling out. Unattended upgrades even staggers updates as of Ubuntu 16.04, so all your Nanos won’t be hammering your update server at once.
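
For what it’s worth, pointing unattended upgrades at your own repository is only a small config drop-in on the rootfs; "Your-Company:stable" below is a placeholder for whatever Origin and Suite your apt server actually publishes:

    # Run from inside the rootfs directory (or the chroot). The filename is
    # arbitrary; apt reads everything in apt.conf.d in order.
    cat > etc/apt/apt.conf.d/51custom-unattended <<'EOF'
    Unattended-Upgrade::Allowed-Origins {
            "Your-Company:stable";
    };
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";
    EOF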