How to increase APP size partition on internal eMMC 64 GB


I have been flashing my customized kernel image and DTB file on the Jetson AGX Xavier Industrial unit.

We have 64 GB of eMMC memory in total, where the root file system is mounted. But I see only 28 GB is allocated for “/”, and the rest of the partitions are tagged with different names/categories. One partition, UDA, has a lot of unused space.

Please let me know how to increase my RFS “/” space from 28 GB to 54 GB out of the total 64 GB eMMC memory.
How do I change this in my partition layout file before flashing?

Is there any other change to take care of before flashing?

Please find attached the partition layout file [flash_l4t_t194_spi_emmc_jaxi.xml] and the [jetson-agx-xavier-industrial.conf] file for reference.
jetson-agx-xavier-industrial.txt (3.5 KB)
flash_l4t_t194_spi_emmc_jaxi.xml.txt (42.3 KB)

My JetPack version is 5.1.2.

Apply this patch and flash again with the -S flag specified:

--- a/flash_l4t_t194_spi_emmc_jaxi.xml
+++ b/flash_l4t_t194_spi_emmc_jaxi.xml
@@ -690,7 +690,6 @@
         <partition name="RECNAME" type="kernel">
             <allocation_policy> sequential </allocation_policy>
             <filesystem_type> basic </filesystem_type>
-            <start_location> 0x70C100000 </start_location> <!-- aligned to 0x100000 -->
             <size> RECSIZE </size>
             <file_system_attribute> 0 </file_system_attribute>
             <allocation_attribute> 8 </allocation_attribute>
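
For reference, assuming the stock flash.sh script in Linux_for_Tegra/ (an assumption about your setup; the board config name is taken from your attached .conf file), the reflash with an enlarged APP partition might look like this sketch:

```shell
# Sketch only: run from Linux_for_Tegra/ with the patched XML in place
# and the Jetson in recovery mode. -S sets the APP ("/") partition size.
cd Linux_for_Tegra
sudo ./flash.sh -S 54GiB jetson-agx-xavier-industrial mmcblk0p1
```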

Thanks for the reply.
Before that, I want to understand the changes and then proceed.

Let me know if my understanding is correct.

  1. Here we need to comment out this line
    <start_location> 0x70C100000 </start_location>

in the partition layout file. What exactly does this do?

  2. -S flag
    Details are as shown below. Here we are just specifying “-S” in the flash command, but not specifying any size. How does it work exactly? Please clarify.

  1. The size of the APP partition cannot exceed a certain limit, as specified by that line. This is to support OTA from rel-32, so if you are not upgrading from rel-32, you can safely remove the limit.

  2. -S XGiB

OK. But I see the partition name here is “RECNAME” instead of “APP”. Please explain this discrepancy. My version is 35.4, so I can safely comment out this line.

So my command to flash would be as shown below, to create the root file system ‘/dev/mmcblk0p1’ —> “/” of around 50+ GB.
Please confirm.

What happens to the rest of the memory left on the eMMC? Will the other partitions take space as assigned in the partition layout XML file? Please clarify for my better understanding.

sudo ./ -S 50GiB jetson-agx-xavier-industrial mmcblk0p1

  1. You don’t really need to know the detail. Please just delete it.

  2. YES

Just want to know: is there any method to increase the memory space in my root “/” directory without re-flashing?
The existing RFS “/” is allocated 28 GB of the 64 GB eMMC memory.
Can we expand it more? My “/” directory is almost full, with no free space available.

But I see a lot of free memory available in the other partitions of the eMMC memory, like “/dev” and “/sys/fs/cgroup”, as in the snapshot in my first post.
Please let me know.

Please just follow what I said earlier in the post.

If you want to add space to a specific mount point other than /, then you can do that with another device and no flash. It is just at that mount point though. For example, you could put /home or /usr/local onto another device after migrating that content to the new device. Not all mount points can be handled this way, but often the bulk of space is consumed in a user’s home directory or /usr/local, so maybe this would work.

/dev and /sys are pseudo filesystems. They are not real filesystems on disk, they exist only in RAM and are the result of kernel drivers pretending to be files as a method of talking to the outside world. You are interested in ext4 space. Check and compare the general listing of filesystems to that specific to ext4:

df -H -T -t ext4
df -H -T

I did not understand this. What do you mean by another device?
Do you mean we need to move the directories /home and /usr/local to external memory, like an SD card or NVMe memory on the unit, so that space in the “/” directory is freed up?

Please let me know the correct steps to migrate the contents to another device and later put the ‘/home’ and ‘/usr/local’ directories onto that device.

I am worried about the data inside ‘/usr/local’. Won’t it affect the working of other applications dependent on ‘/usr/local’ after moving it to a different external device?

At this stage, where my entire unit is working fine and about to be released, I don’t want to take the risk of experimenting with moving folders!

OK, thanks for the info.
I will try these commands later and see what they have to offer.

Any persistent storage device is what I meant by “another device”. It could be a USB or SATA drive of any type, it could be a second SD card, it could be network attached storage. Doesn’t matter. Generically, just substitute “another hard drive”. You are correct that the gist is that you can move content from non-critical places onto an SD card or NVMe, etc., but that storage is not available to the entire system. That storage is limited to the mount point or its subdirectories.

As an example, a lot of CUDA code is under “/usr/local”. You could move that to other media, mount that media on “/usr/local”, and the eMMC would no longer have to store that content. eMMC content would be freed for other use. Or your home directory can be transferred to an external drive, and then that drive mounted on “/home”; the result would free up eMMC. The latter is a good choice if you compile under your home directory and compiling is what makes you run out of space.

A concept to understand is that any partition which is properly formatted can be mounted almost anywhere. Whatever the content is underneath that mount point on the parent device is still there, but not visible until you unmount the extra media that you mounted over it. A bit like taping a page over a book’s page with corrections on the part on top…you can’t see what is underneath it, but it is still there. If the location is non-critical, then it means you can move data to the new device, and if the device fails, the operating system will still function.
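
A tiny sketch of that “page taped over a page” idea, using a hypothetical external partition name (/dev/nvme0n1p1 is a placeholder; check yours with lsblk):

```shell
# Hypothetical device name; nothing here deletes eMMC data.
ls /usr/local                     # eMMC content is visible
sudo mount /dev/nvme0n1p1 /usr/local
ls /usr/local                     # external partition's content now masks it
sudo umount /usr/local
ls /usr/local                     # original eMMC content is visible again
```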

You can in fact leave part of the data on the original mount point as a backup. I can give you examples to work with, but I need to know what other storage devices you have. For example, can you experiment with a USB external hard drive? Or perhaps you have an M.2 NVMe installed? What is it you can use for testing?

On my main computer I have “/home” on its own RAID partition. Naturally, the “~/nvidia” takes up a lot of space (and I am almost out of space, I’m at the point where it is difficult to flash now). I created this by taking the new Ubuntu “/home” (before I added NVIDIA content) and copying it to the RAID partition. I then mounted this on “/home”. Then I started using space for larger apps like the NVIDIA flash software. Because I mounted a copy of “/home” via a RAID partition, that extra space is consumed by the “~/nvidia”. If I were to unmount (the command is actually umount) the RAID array, then my “~/nvidia” will disappear until I mount it again. The other content will be restored. Every once in a while I can mount my /home on a temp mount point and copy the more critical parts to my device on / (but within /home where /home is a subdirectory of /).

One of the reasons I do this (other than being out of space and finding home to be critical) is that I can use the same /home content dual booting between Ubuntu 18.04 and 20.04. The partition for 18.04 is independent of the 20.04 partition. I can have both, and boot to the same /home.

The gist is that it is much more difficult to change / to a bigger or alternate device. It is trivial to take a non-critical subdirectory and move that content onto a second device, leaving the original root (“/”) device untouched.
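
To make that concrete, here is a hedged sketch of migrating /usr/local onto an external partition (the device name /dev/nvme0n1p1 is hypothetical; confirm yours with “lsblk -f -p” before formatting anything):

```shell
# WARNING: mkfs destroys whatever is currently on that partition.
sudo mkfs.ext4 /dev/nvme0n1p1
sudo mount /dev/nvme0n1p1 /mnt
# Copy the content, preserving permissions, ACLs, and extended attributes.
sudo rsync -aAX --numeric-ids /usr/local/ /mnt/
sudo umount /mnt
# Mount the external copy over /usr/local; the eMMC copy is hidden, not gone.
sudo mount /dev/nvme0n1p1 /usr/local
```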

Please provide sample commands to move folders from one device to another.
I know the basic ‘mv’ command.

I am not very familiar with the mount command. Please provide some sample commands for it.
I understand that if we move the /home directory from within “/” on the internal eMMC to the M.2 NVMe memory or the 64 GB external SD card that I have on my Jetson AGX Xavier device, it may not affect other applications or system functionality.

Whereas if we move the /usr/local folder, where the majority of my CUDA JetPack installation is, will it affect application development or other supporting libraries, kernel loadable modules, etc., which reside in other folders under “/” on the internal eMMC?

OK, understood. Please provide sample commands to move and mount my /home and /usr/local directories from the internal eMMC to the external 250 GB M.2 NVMe or the external 64 GB SD card.

OK, understood the theoretical difference between just mounting the folders on other devices and physically moving the data to the other drives (persistent external storage devices).

I have M.2 NVMe memory of 249 GB and a 64 GB external SD card.

Sorry, this RAID concept is a bit new to me and I could not fully understand it.

OK. Though it looks like a bit of a tedious and complex job to understand for people who don’t have much deeper knowledge of partitioning, dual booting, mount/umount, RAID and all…

ok. understood.

@linuxdev I have one more question outside this topic. Please answer if possible.
I need to take a complete backup of my customized BSP package (source, sample RFS, toolchain, etc., the folders which I have modified) to another hard disk/pen drive.
I had tried using the “cp -RF” command, but some of the softlinks and hardlinks gave errors and could not be copied to the pen drive.
The total size is around 80 GB. Let me know of a proper, proven method to back up this entire BSP package. Thanks.

Keep in mind that you always want to start with copy until you know it works as expected. mv will remove original content. If you know what you are doing then mv is a good choice in some cases, but just copy to start with. There is more than one way to copy, but backup and restore tools are the best unless you are copying an entire directory. If you use cp recursively with the name of a directory, then all device special files, symbolic links, permissions, and other attributes are preserved in the destination. As soon as what you copy is something other than a recursive copy of a named directory, you should consider rsync (but recursive copy is sometimes ok for that too if permissions are preserved). rsync is the king of “always works”, and can even be used over ssh to a remote system.

First, be certain that what you migrate is not needed for boot. That eliminates these directories:

  • /
  • /usr
  • /lib
  • /usr/lib

There are others you cannot do this with, but the typical ones are:

  • /home
  • /usr/local

For more detailed information, I’d like to know what the device type is. NVMe naming is different than SATA; SATA can differ over USB. It sounds like you will use a thumb drive. Thumb drives are hot pluggable, so if you monitor “dmesg --follow” and then plug in the thumb drive, what logging shows up as a result?

Now provide a block device description with the thumb drive (or other destination media) connected, and post that here:
lsblk -f -p
(you can create a log of that command and attach it via “lsblk -f -p 2>&1 | tee log_lsblk.txt”, or else just copy and paste it to the forum; if you copy and paste, be sure to mark it as “code” with the “</>” icon so it preserves whitespace)

Also, tell me where the largest chunk of space which you want to migrate is being consumed. I’m thinking either /usr/local or /home. If you go to that location, then get the total space used at that point like this and post it:

cd /usr/local
sudo du -h -s .
cd /home
sudo du -h -s .

(you could pick some other location, but we want to know if (A) it is feasible, and (B) if it is useful)

Thanks for the info. I tried using this rsync command instead of a normal copy for one sample folder, but it gave a message saying “skipping …” and did not copy the folder.
However, I will try once more and let you know the outcome later.

OK. You mean the /home and /usr/local directories are not required for boot, and we can move them to a different external drive.

Currently I am working on different functionality. I will provide this data later.
However, with the information I have, I can tell you this NVMe memory is an external add-on card inserted in the M.2 slot on the carrier board. It is not the hot-plug type.

There is a lot to backup/restore (which I think more Jetson owners should consider before their system needs a flash). Just get an idea of which method you want to use, and I will add details specific to that method.

I should add some detail regarding recursive copy versus rsync. For this I am going to pretend that your external media is “/dev/nvme0n2”, and that the partition you are destroying (we are syncing outside content, what was there previously is expected to be destroyed…don’t do this if you don’t want to lose existing content on that partition) is /dev/nvme0n2p1 (which is the first partition, p1, of nvme0n2). This could just as easily be some other partition, e.g., “/dev/sdb2” (which is the second partition of the second SATA device).

If you recursively copy a directory, then its content is like the inside of a binary file…that too gets copied verbatim. As soon as you use copy to name files, this all changes. Suddenly there are file permissions, interpretation of symbolic links, device special files, and so on. Directory “/” is just a single file with content. If you look at a subset of “/”, you can call it another file or directory, or you could call it the binary content of an image. It is a matter of perspective. Backup and restore methods have different viewpoints.

Using dd implies everything is binary content. There is nothing special about files owned by a particular user, nothing special about executable content, and nothing special about symbolic links.

Recursive copy of a directory, and not naming files, is almost a binary copy (verbatim). What is inside the directory is verbatim unmodified binary content if the user has permission. The directory itself still has to have interpretation of user, group, read/write/execute permissions, so on. You can tell cp to preserve permissions, but in reality a copy command of a directory (naming the directory only, not its content) is a binary bit-for-bit copy aside from the directory name itself having attributes.

Let’s say you’ve mounted an ext4 partition “/dev/nvme0n2p1” at “/mnt”. This is a device boundary (useful to know later on). Some commands can be told to not cross device boundaries. If we copy anything to “/mnt”, this means it ends up on /dev/nvme0n2p1. If we use commands such as cp or rsync, then we need this mounted.

If we use a purely binary tool, for example, dd, then we do not mount the device if it is the source of data. Sometimes we don’t even mount the device which is a destination. dd can write a partition directly, or it can create a file on a partition. dd of a partition to /dev/nvme0n2p1, without /dev/nvme0n2p1 being mounted, would create a partition on the NVMe. That partition would have the same UUID, same content, same size. We’re not doing that, but it is useful to understand. When we flash the Jetson, it is a binary command and directly creates partitions…there are no filesystems.

If we want the simplest of tools, and if we have a mounted /dev/nvme0n2p1 on /mnt, and we want to copy all of “/home”, then it is easy. This would create a complete backup of “/home” there:
sudo cp -adpR /home /mnt/

The result is that every file of every type, including the name “home” itself, goes to “/mnt”, and “/mnt” is your NVMe. You wouldn’t directly mount this on “/home”, it isn’t a binary partition, but it is useful towards reaching that goal.

We can do similar with rsync, and although more complicated, rsync is much more flexible and much more powerful. There are variations to do this over ssh to a remote host…you could do this from the Jetson to a host PC’s drive via the gigabit network if you wish.

Again, assume you have “/dev/nvme0n2p1” mounted on “/mnt”. Now you will back up everything in “/home” in a way similar to the recursive cp, but this is not going to be quite the same…there are intentional differences (note that I am using the “--dry-run” option…a very powerful option if you want to see what would happen without actually doing anything…remove --dry-run to actually do this):

rsync -avczrx --dry-run --numeric-ids --delete-before /home /mnt

Go ahead and mount your ext4 NVMe partition on “/mnt” and run that command. It uses --dry-run, and so you will see log messages as if it were really running, but it won’t actually run.

If you keep your backups and restores to one system, then you don’t have to have --numeric-ids. cp isn’t really capable of reliably doing this, but every account in Linux is really a number, and the name on the account is just an alias. A backup with owner name “nvidia”, when copied to another computer that has different accounts, will break this. If there is no account with that numeric ID, then only the number shows up. There are cases where missing names or name differences across systems result in a translation and alter the original data. Thus, a true backup requires --numeric-ids (not always needed, but it is still a good idea).

The “-avczrx” is just overkill that works in all situations. One of the options is for compression, and this isn’t needed on a local system, but it also doesn’t hurt. In the case of backing up over the network compression is good to have. It is ok to use this in general.

rsync can perform direct backups, or it can perform incremental backups. The first time you back up it is always the entire content. Later on, if a few files have changed, any attempt to use cp will recopy everything…even things not changed. rsync would check for change and only copy what has changed. The larger the content, the more this helps (especially over a network, but it also helps locally).

The --delete-before removes a safety. If you are performing an incremental backup, the default is to not destroy the previous content until certain all update content is present. Should there be a power outage or a network failure (for remote backup), then not using --delete-before prevents losing the content during those circumstances. On the other hand, --delete-before prevents requiring two copies of data. If you were backing up and had a file which is 20 GB in size, then without --delete-before your destination disk must be 20 GB larger than what it actually contains in the end. The --delete-before is what I use because (A) I don’t expect a failure during update, (B) if my backup dies, or at least one file dies, I still have a copy, and (C) I’m really almost out of disk space (it is hard to flash newer Jetson with larger partitions because of this). My default is “--delete-before” to save space.

Let’s look at a variation: Mounting a backup device on “/mnt” of a separate host PC, and then backing up from Jetson to host PC. For illustration, the backup device partition is “/dev/sdb1”. I’m going to simplify by saying we are running as root. You can also unlock root on Ubuntu, set up ssh keys, and then relock root login via password on ssh, but still have key access (which is very useful). The host PC name will be “master” on the network (perhaps we’ve put this in “/etc/hosts” as an alias for the host PC on the Jetson):

# On master:
sudo mount /dev/sdb1 /mnt

# On Jetson:
rsync -avczrx -e ssh --dry-run --numeric-ids --delete-before /home root@master:/mnt

Both methods create a “home/” on the device mounted on “/mnt” (one case the /mnt is on the Jetson, the other case is the /mnt is on a different host PC).

Working entirely locally implies you don’t have to worry about sudo, it’ll just work.

A summary of some methods:

  • Recursive cp of a directory (not file names, but a directory name) is simple. This works for a copy of “/home” or “/usr/local”.
  • rsync is more powerful. This works when naming specific files as well as directories. This works across different systems and works correctly even for more exotic file types. rsync is less intensive than cp if this is an incremental backup.
  • dd and cloning are almost the same thing, but clone is better since the system is offline and shut down during the clone. Clone has no issues with files changing during clone.
  • Your “/usr/local” is larger than your “/home”. That directory has over 4 GB of content. That directory includes the CUDA development tools. So this is a good candidate.
  • Your “/home” might be where you compile software. Imagine you are compiling a kernel. In that case /home might actually be larger than /usr/local (at least temporarily). So although smaller, if you intend to work on a large program, and you develop from /home, then /home is a better choice.
  • You could have partitions on both “/usr/local” and “/home”.
  • Once you have this backed up we can set it up so that it can be mounted over either /home or /usr/local. If it works, then most of the original /usr/local content can be deleted to provide more space on the eMMC. In the case of /home you might delete some content, but leave basics in place so that you still have home content when the external disk is not there.
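
If the mount-over approach works out, making it survive reboots is usually done with an /etc/fstab entry; a hedged sketch (the device name is hypothetical and the UUID is a placeholder, read the real one from “lsblk -f”):

```shell
# Find the partition's UUID first:
lsblk -f -p /dev/nvme0n1p1   # hypothetical device name

# Then add one line like this to /etc/fstab (placeholder UUID shown):
#   UUID=<your-uuid-here>  /usr/local  ext4  defaults  0  2
# and verify the entry before rebooting with:
sudo mount -a
```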

Meanwhile, could you please reply to this thread?

I am not able to take a backup; the image file is not getting copied to the Image folder as per the backup and restore steps provided in the readme file.

Backup and restore tools tend to imply you have the rootfs (the “/”) as external media, perhaps with initrd. Someone else would have to reply regarding the backup/restore tools since I always do this directly. Maybe @WayneWWW or @KevinFFF could reply relative to the README_backup_restore.txt instructions.

My instructions above are not to create a full image. The rsync information is about backing up the content in a partition, which could then be used to create a partition on any media. Clone is specific to a partition itself, and that is what those tools do. If you are speaking of just a given subdirectory (e.g., “/home” or “/usr/local”), then the backup and restore tools are the incorrect method for doing this.

My rootfs is on the internal eMMC, mmcblk0p1. Do we need to modify anything to make the backup and restore tool work?

Note: first I will perform the backup, and in the next step I will perform the restore, as per the steps provided in the readme file.

I understand that.

Also, I found another link below which gives different steps to take a backup and restore.

Which one is applicable for my AGX Xavier Industrial, version 35.4.1?

Since this is purely on eMMC, then I suggest just a simple clone to get the original image (@WayneWWW or @KevinFFF might have a way to enlarge the partition live). Clone will give you a raw clone (the exact size of the partition), plus a sparse clone (faster for flashing, but not as useful). Backup and restore is far different when external media is involved, so be careful when looking at threads on that topic to just look at purely eMMC solutions.

The clone command in the last URL you gave is correct. The Jetson must be in recovery mode. However, the target hardware might change (with jetson-agx-xavier-devkit being an ordinary dev kit). If you go to your “Linux_for_Tegra/” directory, check these files:
ls jetson*.conf

Do you see one which matches your industrial hardware? If so, then the target is the name of that file without the “.conf” suffix. Simply replace jetson-agx-xavier-devkit with that.
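
For reference, the usual shape of an APP-partition clone with flash.sh (device in recovery mode; the image filename my_backup.img is arbitrary) is:

```shell
cd Linux_for_Tegra
# -r reuses the existing system image; -k APP names the rootfs partition;
# -G reads it back into my_backup.img (a my_backup.img.raw is also produced)
sudo ./flash.sh -r -k APP -G my_backup.img jetson-agx-xavier-industrial mmcblk0p1
```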

I want to note a possible complication, but this is more along the lines of restore than it is about clone. If you have an industrial model, then you also have a third party carrier board. The boot chain and software that is equivalent to what a BIOS would do will differ most of the time. As a result there will be a different device tree. The manufacturer of the third party carrier board would have to provide that. The rootfs you are cloning won’t care per se, but if you restore this system, and reuse the clone, then the rootfs would be correct, and whether or not the other boot software is correct will depend on if you’ve used that third party’s flash instructions (adjusted to reuse the clone).

Thanks for the information.

I was able to successfully back up and restore the entire eMMC (not just the APP clone, as I mentioned in the earlier post) using the README file present in the backup_restore folder under tools.

This command worked;

sudo ./tools/backup_restore/ -e mmcblk0 -b jetson-agx-xavier-industrial

I had given “mmcblk0p1” instead of “mmcblk0” earlier; that is why it was failing to create the Image files under the Image directory and the command was failing.