I have a Jetson TX2 with an older version of Jetpack (3.X.X). I want to upgrade the Jetson to Jetpack 4.X.X. Is there a way to do this without completely wiping the drive? I want to keep all of my runfile installed software and deb file installed software. I remember when I formatted the Jetson the first time that I had to connect a host linux machine and wipe the entire Jetson hard drive. Is there a way to upgrade without wiping any user-generated data?
Generally speaking, no. The most recent release does support over-the-air updates, though, so once you move to that release it is possible that future upgrades won’t require a full reflash.
As a workaround, you can clone the rootfs first. That clone can be mounted, examined, copied, or edited on the host PC. The clone makes a nice backup even if you are not upgrading. When you clone, you get both a “raw” file (something “.img.raw”), and a “sparse” file (".img"). I throw away the smaller sparse file since it cannot be mounted/examined/modified, and only works for flash. The larger raw file is the same size as the partition, so you’re maybe talking 16GB to 30GB (depending on which Jetson you are working with).
For the commands below, work from the “Linux_for_Tegra/” subdirectory of your flash software. If you’ve run SDK Manager, this is under the directory it created for your particular JetPack release and Jetson model (adjust for the JetPack version you use; P3310 is the TX2 dev kit).
Normally a fresh flash takes the sample rootfs folder (“Linux_for_Tegra/rootfs/” with NVIDIA drivers added on top of that, but otherwise being purely Ubuntu), and makes some “rootfs/boot/” edits. This then generates an exact partition image at “bootloader/system.img”. Incidentally, this also generates “bootloader/system.img.raw”, but the raw image is not used…if you were to move the raw image to the name of the sparse “.img” file, then that image would work perfectly, but the flash would take longer.
The rootfs content of one release is not compatible with another release (unless it is a minor bug fix release), but even if the content were compatible, you’d be defeating your purpose: you’d be replacing the Ubuntu 18.04 content with the old Ubuntu 16.04 content, and all you’d get is different boot stage content. Most of the time a mismatch between the boot stage content and the rootfs ends up not booting correctly.
Having your “backup.img.raw” from a clone would enable you to mount this on your host PC as if it were an actual disk drive partition. You could then copy relevant files to the “rootfs/” content and have that content become part of the flash (you’d have to first have the original “rootfs/” content put in place, and then add files on top of this). The generated “bootloader/system.img” would have your edits (this image is an exact image of “rootfs/” other than the boot related content edits).
The instructions for cloning may depend on the release used. Keep in mind that although you will only want to flash with the same JetPack/SDKM release the clone was created from, you can still create the clone with most any release…the cloned rootfs should be fine no matter which release you use to create it. So for example, you could not use JetPack 4.2 to flash a backup of a JetPack 3.3 rootfs clone, but there would be no problem if you have only JetPack 4.2 and use it to clone a Jetson which was originally flashed with JetPack 3.3. It is the instruction on how to clone which changes depending on release, not the clone itself. In JetPack 4.2 or 4.3 you should be able to clone with this command if the Jetson is connected with the micro-B USB cable and in recovery mode:
sudo ./flash.sh -r -k APP -G my_backup.img jetson-tx2 mmcblk0p1
This command will produce both “my_backup.img” and “my_backup.img.raw”. I’d delete “my_backup.img” and keep the “.raw”. If you need to store this, or protect it by working only on copies, expect operations on such a large file to take far longer than you’d normally expect. I compress my spares via “bzip2 -9 my_backup.img.raw”, or uncompress with “bunzip2 my_backup.img.raw.bz2”.
To mount this clone with loopback on your host you could do something like this:
sudo mount -o loop ./my_backup.img.raw /mnt
cd /mnt
ls
cd -
sudo umount /mnt
I tend to use this to create a minimal file tree of just the unique items I want to create in an “overlay” directory on the host, and then whenever I want to generate a new system, I just recursively copy the overlay onto the new “rootfs/”.
Be certain to always use sudo and preserve numeric IDs when making such copies, since the name-to-numeric-ID mappings may differ between the host PC and “rootfs/”. Also preserve permissions where needed. With sudo, “cp -a” preserves ownership by numeric ID (no name translation occurs); “rsync -a --numeric-ids” or “tar --numeric-owner” are alternatives.
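As a minimal sketch of that overlay approach (the “overlay/” and “rootfs/” paths here are toy stand-ins; on a real flash setup “rootfs/” would be “Linux_for_Tegra/rootfs/” and the copy would be run with sudo):

```shell
# Toy overlay tree holding just the files to layer on top.
mkdir -p overlay/etc rootfs
echo "demo" > overlay/etc/demo.conf

# "-a" (archive) preserves permissions, symlinks, and ownership by
# numeric ID; run with sudo on a real rootfs so ownership survives.
cp -a overlay/. rootfs/

# The overlaid file is now part of the rootfs tree.
cat rootfs/etc/demo.conf
```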
I have a couple of follow-up questions, then:
I have an SSD permanently attached to the SATA port. Is there a way to avoid touching that drive when I upgrade the Jetson OS?
Is it worth it to also upgrade the OS from Ubuntu 16.04 to Ubuntu 18.04?
I am beginning to get the impression that I should just get a list of the software I installed on the Jetson so that I can copy the home directory to another device. Actually, I think it would be smart to just copy the home directory to the SSD (it has 1 TB of space) and then wipe the Jetson. How would you approach that process? I should also say that the SSD is automounted through /etc/fstab, so I would like to preserve those settings as well. And how can I get the list of installed software packages, assuming I will wipe the entire thing anyway? I now believe that completely reformatting the device is the best approach. Most of my software runs on 18.04, except for the ROS Kinetic stuff, and all of that software is essentially equivalent between ROS Kinetic and ROS Melodic, so I don’t think that is going to be a problem. I am using basic ROS packages plus custom packages, and I have no major requirement to stick with ROS Kinetic.
I know I went slightly off topic, but I want to make sure I take the safest approach that also allows for the most flexibility in the future.
An official answer on whether it is worth upgrading from 16.04 to 18.04 would depend on what you are doing. My own general thought is that I would personally develop on 18.04 without question, unless there is some special case preventing this. The CUDA version is tied to the release, and you cannot progress in CUDA release without progressing to 18.04. Conversely, if you must have the current 16.04 release’s CUDA version, then you cannot migrate to 18.04. I do not know whether ROS Kinetic can work with 18.04; I’ve never used it.
With regards to the SSD, the mount point and content would change many things related to flash. In a normal flash the eMMC is flashed, and this would not “directly” touch the SSD, but it might change things related to how the SSD is used. The flash itself, with the Jetson in recovery mode, does not give flash access to the SSD. Even if you have a flash parameter naming an external boot device, then it is the metadata on the eMMC pointing to such external devices which would be modified. If it turns out your SSD is the root filesystem, then you cannot save the SSD, but you also must update the SSD separately (the flash itself would not update the SSD if it is a rootfs).
An SSD mounted to a home directory does not need to be changed (at least not in general). You would simply flash normally, and then tell the configuration (after flash) to mount the SSD on “/home”. The part to beware of is that much of CUDA goes into “/usr/local/”, and if your home directory has content referring to this, then that software may need to be adjusted for new CUDA release versions.
A Jetson clone would contain your entire “/home” content if and only if you have that content on your eMMC (Jetson clone tools pay attention only to the internal eMMC). You can keep a copy or clone of any “/home” SSD partitions as well if you wish, but this is independent of the flashing process (any partition can be cloned and saved as a file, e.g., with “dd”).
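As a sketch of that kind of partition clone with “dd” (demonstrated here on a small file-backed stand-in; on the real system the input would be the SSD’s device node, e.g. something like /dev/sda1, which you should confirm with “lsblk” before running anything):

```shell
# Stand-in "partition": 4 MiB of a file instead of a real device node.
dd if=/dev/zero of=fake_partition.bin bs=1M count=4 2>/dev/null

# The actual clone step; on real hardware this would be something like:
#   sudo dd if=/dev/sda1 of=ssd_backup.img bs=4M status=progress
dd if=fake_partition.bin of=ssd_backup.img bs=1M 2>/dev/null

# Verify the copy is bit-for-bit identical.
cmp fake_partition.bin ssd_backup.img && echo "identical"
```

The resulting image file can later be loopback mounted the same way as a Jetson rootfs clone.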
Clones of either the SSD or the Jetson’s internal eMMC can be loopback mounted on any Linux system, and copying and examination can occur whenever you choose, even if the original source is erased on the Jetson. This is a nice safety mechanism: whatever you think you need prior to flash might not really be the whole story, but an original clone will still exist as if it were the original.
The “/etc/fstab” would be wiped out by flash. Prior to flash, on the host PC, you could put whatever edits you want into the “Linux_for_Tegra/rootfs/etc/fstab” and that content would be pre-updated the moment the flash runs. I would not necessarily recommend that as you might want to just have a full uncomplicated flash, and then edit the file after. A clone would be able to save the original fstab content for later examination if you choose.
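As a sketch, an automount entry for the SSD in “/etc/fstab” might look like the line below. The UUID is a placeholder (get the real one from “sudo blkid”), and the mount point and ext4 filesystem type are assumptions; “nofail” keeps boot from stalling if the drive happens to be unplugged:

```
UUID=<your-ssd-uuid>  /mnt/ssd  ext4  defaults,nofail  0  2
```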
Something to think about: If you have a clone of both the SSD partition on “/home”, and the eMMC of the Jetson, then both can be restored back to their original state. If you have cloned everything, then any examination you want to perform can be at any time you choose, even after the flash, and everything can be restored. The main problem with this is the enormous amount of disk space this might take on the host PC.
Do be aware, though, that if you back up individual files for later use (clones do not have this concern), then you need to be certain the files are saved with numeric user and group IDs, and that the copy is done as root (via sudo). The numeric-ID-to-name mapping will differ between two Linux computers, and worse, non-Linux filesystems have no ability to save group, sticky bit, or SUID information at all…only root can alter IDs or manipulate the ID of another user.
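A sketch of a numeric-ID-preserving file backup with “tar” (the “demo_home” directory is a toy stand-in; on the real system you would run this with sudo against the actual directory, e.g. “/home”):

```shell
# Toy directory standing in for the real files to back up.
mkdir -p demo_home
echo "data" > demo_home/file.txt

# --numeric-owner stores raw UID/GID numbers rather than translating
# through /etc/passwd; "-p" preserves permissions. Run as root for
# files you do not own.
tar --numeric-owner -cpf home_backup.tar demo_home

# Listing the archive confirms the contents.
tar -tf home_backup.tar
```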
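On the earlier question of getting a list of installed packages before a wipe, one common approach uses the standard dpkg/apt tooling (the file name here is arbitrary; the list is mainly a reference, since package versions differ between 16.04 and 18.04, and runfile-installed software is not covered):

```shell
# Full list of installed deb packages, one per line.
dpkg --get-selections > package_list.txt

# On a new system of the same release, the list could be fed back:
#   sudo dpkg --set-selections < package_list.txt
#   sudo apt-get dselect-upgrade
head -n 3 package_list.txt
```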
I have installed the main operating system on the eMMC chip, NOT on the SSD. The SSD is just a storage device on the Jetson; it does not contain anything to do with the rootfs. For all intents and purposes, it is like a permanently mounted thumb drive. It sounds like I just need to copy the /etc/fstab file and the /home directory over to the SSD. I would then turn off the Jetson, unplug the SSD, and do the JetPack 4.X installation. Once the installation is complete, I would turn off the Jetson, plug in the SSD, and edit /etc/fstab to again automount the SSD. Do I have the process correct?
ROS Kinetic is Ubuntu 16.04 only, but the ROS software that I care about is already ported to the newest version, ROS Melodic (Ubuntu 18.04). I was just mentioning that it is the only major software package that does not stay the same if I upgrade.
The important part about ROS is that, since I also have a Jetson Nano, I will need both devices to have the same ROS version and Ubuntu release if they are to talk to each other through ROS. CUDA versions and JetPack versions do not affect this. Hopefully that clarifies the context of my question.
Your procedure seems correct. I will add one caveat to beware of: your user name is mapped to a numeric ID, as are the user’s group name and group ID. The files on the saved SSD will probably match the newly generated user/group numeric IDs if you create users in the same order, but this is not a guarantee (this only matters for users added after the install, e.g., on first boot account creation or later).
I doubt user numeric IDs will be a problem, especially if you’ve added one account for your user on the original system, then add that one account to the new system as usual on a new install. If you have two or more accounts added on the old system, then be sure to add those accounts in the same order as they were previously added.
If you use “ls -l” on your SSD after it is mounted on the new install, then you should see user and group show up normally for ownership. If you see a number instead of a name for user and/or group, then your user/group IDs have changed for that user name. In that event the numeric IDs can be changed to match the new IDs and it will function normally, so just ask on the forum how to do that if for some reason numbers show up instead of names. This is usually a problem only on a system with several regular user accounts added in different orders.
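For reference, remapping ownership is normally a “chown” operation. A minimal sketch using the current user’s own IDs on a toy directory (on the real system it would be something like the mounted SSD’s home directory, run with sudo and the numeric UID:GID you want, e.g. 1000:1000):

```shell
# Toy directory standing in for files whose ownership needs fixing.
mkdir -p mnt_demo
touch mnt_demo/file

# On the real system this would be something like:
#   sudo chown -R 1000:1000 /mnt/ssd/<user-dir>
chown -R "$(id -u):$(id -g)" mnt_demo

# "-n" shows numeric IDs so you can confirm the mapping took effect.
ls -ln mnt_demo
```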
You could save a copy of the original system’s “/etc/passwd” and “/etc/group” files for later reference as to the original name-to-numeric-ID mappings.
Yeah, I only have the original user accounts from when I first installed the JetPack software: “nvidia” and “ubuntu”. I did not add new users. If I understand you correctly, I should therefore NOT have any issues with users and permissions on that SSD. Is this correct?
The short answer is that if you add accounts in order, then you are correct. In order would be to first add account “ubuntu”, and then to add account “nvidia”. First boot account creation would imply user “ubuntu” (regardless of adding the account on first boot or at a later date the result would be the same).
In the case of the earlier release which had users “ubuntu” and “nvidia” already installed, the UID/GID pairs were “1000”/“1000” for “ubuntu” and “1001”/“1001” for “nvidia”. “1000” is the UID/GID of the first non-system user. Thus, if on the new system you first add “ubuntu”, this user should match the UID/GID of the original. Similarly, if on the new system you add user “nvidia” second, then it should match. If you reverse the order, then the two will swap which owns what.
For use of CUDA and GPU it may be that users may need to be manually added to group “video”. The official way of doing that, if you find CUDA or GUI not working on a user, would go like this (adjust for the account name, I’m just assuming account “nvidia” for an example):
sudo usermod -a -G video nvidia
In a similar way, the “adm” group might be needed for use of sudo, so:
sudo usermod -a -G adm nvidia
Or all three combined in a single command (note that the first boot account name would already be in group “adm”, so “adm” wouldn’t be needed for “ubuntu”, but would be needed for “nvidia”):
sudo usermod -a -G video,adm nvidia
If you need that user to access serial devices, then it may be necessary to add to group “dialout”. For example, this would be useful on account “nvidia”:
sudo usermod -a -G dialout,video,adm nvidia
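After adding groups, membership can be verified with “id” or “groups” (shown here for the current user; substitute the account name, e.g. “nvidia”):

```shell
# "id" prints the uid, gid, and all supplemental groups for an account.
id "$(whoami)"

# "groups" prints just the group names.
groups "$(whoami)"
```

Note that newly added group memberships only take effect once the user logs in again.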