2G of partition space is lost

Hello, we have over 20 Nano devices, and recently found that 2G of partition space has been lost:


As shown in the picture, the actual content is only about 11G, but 13G is reported as used, close to full. This has now happened twice, on different devices, so I checked dmesg and found the following messages:
so I checked dmesg, found some text:

[ 1.920693] EXT4-fs (mmcblk0p1): warning: mounting fs with errors, running e2fsck is recommended
[ 1.922223] EXT4-fs (mmcblk0p1): recovery complete
[ 1.922230] EXT4-fs (mmcblk0p1): mounted filesystem with ordered data mode. Opts: (null)

Device basic info:
jetpack 4.4
ubuntu 18.04
kernel 4.9.140-tegra

Has anyone else run into the same problem?

I couldn’t tell you where the space went, but the errors are from an improper shutdown. Has it ever locked up and had to have the power pulled? Or has power been removed while it was running? The particular error seems to be within the journal limits, so although you might have lost some files being written at the time of power removal (there is almost always some sort of temp file being written), it is unlikely to be significant in size unless you were purposely writing a very large file. How do you shut these down?

Most of the NVIDIA development content is in “/usr/local”, so if this accounts for a large amount of storage, then I’d say this is more or less “normal”. What do you see from:
sudo du -h -s /usr/local

Hi, thanks for your reply,

$ sudo du -h -s /usr/local
221M /usr/local

So can you give me some advice on how to avoid this problem?

There is more than one problem, and I think the improper shutdown is unrelated to the space consumption. For improper shutdown the fix is to always shut down normally without cutting power (and sometimes shutdown might be misleading if it is only “sleep” instead of actual shutdown…sleep simply goes to RAM and RAM is lost when power is lost, so be sure this is not “sleep” or “hibernate”).

One way or another, you might need to spend significant time searching for what uses the space. I would start by cleaning out the unused cache files the package manager keeps:
sudo apt-get clean
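
If you want to see how much that actually recovers (the cache path below is the standard Ubuntu location, so treat it as an assumption for your image), check the size of the package cache before and after running the command:
# show how much the apt package cache currently holds
sudo du -sh /var/cache/apt/archives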

It seems that 221 MB is not enough space used to make a big difference. There is an application called “filelight” you might want to use to find the content consumption, but I must warn you that the application itself is huge and will consume a lot of space. Then find out where the space is distributed.

You can clone the rootfs and use filelight on the loopback mounted clone from your host PC. Having this on the host PC makes searching much easier.
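
For example, on the host PC (assuming the clone is a raw ext4 image; “backup.img” and the mount point are just placeholder names), a loopback mount would look roughly like this:
# mount the raw clone image read-only via loopback
sudo mkdir -p /mnt/clone
sudo mount -o loop,ro backup.img /mnt/clone
# point filelight (or du) at the mounted clone
filelight /mnt/clone
# when finished
sudo umount /mnt/clone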

There are also tools like “find” which can find files or directories of a given size or larger, but this is hard to use unless you already know the subdirectory to start in for searching.
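
As a rough sketch, something like this lists files of at least 100 MB while staying on the rootfs (the size threshold is only an example):
# -xdev keeps find on the ext4 partition instead of descending into /proc, /sys, or other mounts
sudo find / -xdev -type f -size +100M -exec ls -lh {} \; 2>/dev/null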

Thanks, but the filelight application just shows big files; what I actually want to know is why the 2G of space was lost.

I found someone saying that files can be deleted but not yet released, so I ran the command below to find such files, but found nothing.

lsof | grep -i delete | sort -nrk7 | head | awk 'BEGIN{print "file-size","PID","system"}{print $7/1024/1024"M",$2,$9}' | column -t

file-size PID system
64M 6747 /memfd:pulseaudio
0.0001297M 11293 3
0.0001297M 11293 3
0.0001297M 11293 3
0.0001297M 11293 3
0.0001297M 11293 3
0.0001297M 11293 3
0.0001297M 11293 3
0.0001297M 11293 3
0.0001297M 11293 3

I think the files you found from pulseaudio, combined, are probably much smaller than 2GB. Also, they are likely temporary and used while processing media, like audio from a web browser.

The reason for filelight is to find out where the biggest consumption of space is at, visually, via pie charts. I don’t know where the 2GB is at, but if you “drill down” to directories which seem to consume a lot, then you might find what you are looking for. Do beware you shouldn’t just go around deleting files since you might delete something important.

One example of something to not delete is a swap file. Often this is just a “zram” file, and not a “real” file (it is in RAM and pretending to be a block device). If you run the command “sudo swapon -s”, do you see anything other than zram?

You basically have a mystery, and unless you catch the particular login which bumps usage up by 2GB and check logs, there isn’t much you can do to directly find out where this is at. I do want to emphasize though that there is a significant possibility that it isn’t something you can just delete without consequences.
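
If you want to try to catch the moment usage jumps, one simple sketch (the log file name and interval are arbitrary) is to record a timestamped df snapshot on a loop and later compare the timestamps against your other logs:
# append the date plus rootfs usage to a log every 10 minutes
while true; do { date; df -h /; } >> ~/df-history.log; sleep 600; done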

When I ran ‘sudo swapon -s’, it showed the result below:
swapon -s
Filename Type Size Used Priority
/dev/zram0 partition 507416 0 5
/dev/zram1 partition 507416 0 5
/dev/zram2 partition 507416 0 5
/dev/zram3 partition 507416 0 5

and when I ran the same command on another, normal device, I got the same result.

The reason I want to know where the 2G space is: when I use the df and du commands, the space matches and nothing appears lost.

In addition, I found some messages in dmesg:

[ 4.980496] zram: Added device: zram0
[ 4.987133] zram: Added device: zram1
[ 4.988322] zram: Added device: zram2
[ 4.997019] zram: Added device: zram3
[ 5.035760] zram0: detected capacity change from 0 to 519598080
[ 5.054019] Adding 507416k swap on /dev/zram0. Priority:5 extents:1 across:507416k SS
[ 5.062093] zram1: detected capacity change from 0 to 519598080
[ 5.084607] Adding 507416k swap on /dev/zram1. Priority:5 extents:1 across:507416k SS
[ 5.104261] zram2: detected capacity change from 0 to 519598080
[ 5.117954] Adding 507416k swap on /dev/zram2. Priority:5 extents:1 across:507416k SS
[ 5.128629] zram3: detected capacity change from 0 to 519598080
[ 5.141781] Adding 507416k swap on /dev/zram3. Priority:5 extents:1 across:507416k SS

[ 316.522889] EXT4-fs (mmcblk0p1): error count since last fsck: 1150
[ 316.529482] EXT4-fs (mmcblk0p1): initial error at time 1648534020: ext4_orphan_get:1184
[ 316.537852] EXT4-fs (mmcblk0p1): last error at time 1655777941: ext4_mb_generate_buddy:759

I will check the kernel source for the EXT4-fs messages above; I hope that is helpful.

This confirms the space is not consumed by swap. The zram is just literally RAM, but allocated to pretend it is disk space. I had hoped it might count for used space, but it does not.

The error count on ext4 though is an issue, although not necessarily one of excess used space. Apparently the system was shut down incorrectly, and the ext4 journal stopped some corruption, but the need to remove an “orphan” inode tends to say there may have been some file which had unrecoverable problems and might now be missing (however, once again, a missing file won’t account for space used).

There are basically two choices of issues: A lot of small things consuming space and adding up to a lot of space, or a few large things all by themselves. I had kind of hoped you could go through with filelight (or any other method) and find out where the space is consumed (but it only matters if it is abnormal use of space…the operating system itself is expected to have some large content).

One possibility I had not thought of before is that inefficient space use can be a problem. To that extent “inodes” can run out. What do you see from:
df -i -h -T
(every file has at least one inode, and as the file grows, a chain of inodes are used for a particular block of bytes…the reason a disk is called a “block device”…running out of inodes can fill a device even if very few bytes are stored)

@linuxdev
Hi, I did some tests based on your suggestions and found the 10 biggest files; please check them, thanks.

cidi@cidi:~$ df -i -h -T
Filesystem Type Inodes IUsed IFree IUse% Mounted on
/dev/mmcblk0p1 ext4 896K 239K 658K 27% /
none devtmpfs 436K 706 436K 1% /dev
tmpfs tmpfs 496K 14 496K 1% /dev/shm
tmpfs tmpfs 496K 3.4K 493K 1% /run
tmpfs tmpfs 496K 6 496K 1% /run/lock
tmpfs tmpfs 496K 18 496K 1% /sys/fs/cgroup
tmpfs tmpfs 496K 23 496K 1% /run/user/120
tmpfs tmpfs 496K 12 496K 1% /run/user/1000
cidi@cidi:~$ sudo su
[sudo] password for cidi:
root@cidi:/home/cidi# find / -type f -printf '%s %p\n' 2>&1 \
   | grep -v 'Permission denied' \
   | sort -nr \
   | head -10

3221225472 /home/data/data_log
2147483648 /home/data/data_record
92348298 /usr/local/lib/libavcodec.a
78168856 /home/cidi/.vscode-server/bin/dfd34e8260c270da74b5c2d86d61aee4b6d56977/node
61753688 /usr/bin/dockerd
61028160 /usr/lib/aarch64-linux-gnu/libLLVM-9.so.1
54568960 /opt/ota_package/kernel-extmod-ubuntu18.04_aarch64.tar
53780480 /opt/ota_package/kernel-extmod-linux_x86_64.tar
52516168 /usr/bin/docker
48741256 /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_bionic_universe_binary-arm64_Packages
root@cidi:/home/cidi#
root@cidi:/home/cidi#
root@cidi:/home/cidi# du -sh /home/
4.4G /home/
root@cidi:/home/cidi# for i i /; do du -sh $i; done
bash: syntax error near unexpected token `i’
root@cidi:/home/cidi# for i in /*; do du -sh $i; done
11M /bin
60M /boot
4.0K /dev
13M /etc
4.4G /home
434M /lib
0 /log.txt
16K /lost+found
32K /media
4.0K /mnt
217M /opt
0 /proc
4.0K /README.txt
77M /root
20M /run
11M /sbin
8.0K /snap
4.0K /srv
0 /sys
7.5M /tmp
4.7G /usr
442M /var
root@cidi:/home/cidi# sudo du -h -d 1 /
434M /lib
11M /sbin
0 /sys
4.0K /mnt
4.0K /srv
11M /bin
32K /media
16K /lost+found
4.4G /home
7.5M /tmp
60M /boot
8.0K /snap
217M /opt
77M /root
4.7G /usr
13M /etc
20M /run
4.0K /dev
442M /var
0 /proc
11G /

Note that “df” output for any filesystem type other than “ext4” is just in RAM. Actual partition space, in this case, is only the line mentioning “/dev/mmcblk0p1”.

Yes, these two are very interesting consumers of space:

3221225472 /home/data/data_log
2147483648 /home/data/data_record

The first is about 3 GB, the second is about 2 GB. These are about 5 GB combined.

I did count these 2 big files, and the total is still 11G (while 13G is actually used). I also need to explain these files: I used the commands below to create 2 disk image files and then mounted them as partitions:

# sudo truncate -s 3G /home/data/data_log
# sudo truncate -s 2G /home/data/data_record
# mount -o loop /home/data/data_log /mnt/log
# mount -o loop /home/data/data_record /mnt/record

To be clear, I counted the space after unmounting /mnt/log and /mnt/record.

In addition, after all our discussion here, it seems the real reason is still not clear, right?

There are many tools to find or view sizes, but unfortunately, only a few actually provide a good “picture” of what consumes space in a significant way. For example, your data_log and data_record present large sizes which consume from partition mmcblk0p1 (which is rootfs). This is why I recommended filelight: This allows a “pie chart” of directory content, and you can click on this to drill down; if a directory is small, there is no need to drill down, and if the directory is large, then you can drill down and visually see where the space is going. You could list every file and directory, put them in a spreadsheet, and sort, but although this is accurate, it is also an impractical pain to do so. Filelight itself requires a lot of space, but I still recommend it as a temporary tool.
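
If installing filelight is not practical, a rough command-line equivalent of that “drill down” (just a sketch) is to summarize a couple of directory levels at a time and sort by size:
# -x stays on the rootfs; show the 30 largest directories down to two levels
sudo du -xh -d 2 / 2>/dev/null | sort -h | tail -n 30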

I’m not sure what those “/home/data” files are, nor why you can truncate and mount them (most loopback filesystems will corrupt if you truncate without a filesystem-aware tool, except if the filesystem is contained in only part of the file). However, assuming you can mount them in a subdirectory of “/mnt”, the files themselves still consume from rootfs despite creating a new filesystem from “/mnt”. Mount or umount of those files will have no effect on rootfs other than making the content visible in a different way at a second location. Those files are enormous.
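
One side note, since truncate normally creates sparse files: the apparent size of those two files and the space actually allocated on disk can differ, and different tools may count them differently. Something like this (a sketch) shows both numbers:
# apparent size (what ls and find report) versus blocks actually allocated
sudo du -h --apparent-size /home/data/data_log /home/data/data_record
sudo du -h /home/data/data_log /home/data/data_record
# 'ls -ls' shows allocated blocks in the first column next to the byte size
sudo ls -ls /home/data/data_log /home/data/data_record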

Also, be careful that some tools which summarize consider a kB as 1000 bytes and a KiB as 1024 bytes (or a GB as 1000x1000x1000, but a GiB as 1024x1024x1024). Be certain that whatever app you use to list file sizes is using the same notation as the one you get the 11 GB or 13 GB figure from.
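
With df the two conventions are a single flag apart, so it is easy to check which one a given number came from:
# powers of 1024 (GiB, shown as "G")
df -h /
# powers of 1000 (GB, shown as "G"); the same usage looks like a larger number
df -H /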

This might be going off on a tangent, but also beware that the ext4 filesystem is a tree-based system with “nodes” that link together. Those nodes are what you see as “inodes” in the “df -i -h -T” command. If you have an inode of a given size, then if the file requires one node to list it, expect that actual space consumed to be the sum total of inodes needed to contain that file content (there is metadata in each node, plus some actual file content). If a file requires 1000 inodes to contain it, then the actual file consumes the space of 1000 inodes despite the file content actually being some subset of that 1000 inode size (metadata consumes space too).

Think about when a filesystem is formatted: It probably has a block size of 512 bytes (I’m somewhat using “block size” and “inode size” interchangeably…on a tree based filesystem they are mostly the same thing), but it is possible to use a larger block size during format, e.g., 4096 bytes; with 512 bytes the smallest space a 1 byte file can consume is 512 bytes, and with 4096 bytes block size, the smallest amount of space a 1 byte file can consume is 4096 bytes. However, a file which is much larger than 4096 bytes will be more efficiently stored with a 4096 byte inode compared to a 512 byte inode since fewer inodes are required (each inode has the same amount of metadata required regardless of the inode size, leaving more room for actual data). If you have a lot of small files, perhaps 1 byte each, then 512 byte inode size is more efficient than 4096 byte inode size. I mention this because I’m wondering if you are comparing disk space consumption directly to file sizes…this won’t work out because of metadata in each block/inode.
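
You can read the block size (and inode size) ext4 was actually formatted with straight from the superblock; the device name below is taken from your df output:
# report the block and inode parameters recorded in the ext4 superblock
sudo tune2fs -l /dev/mmcblk0p1 | grep -iE 'block size|block count|inode size|inode count'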

First, I tested on my own system (Ubuntu 18.04 arm64):
Screenshot from 2022-07-05 14-20-26
So I can confirm that 1M = 1024*1024.
Then I used ‘fdisk -l’ to show the block size:
Screenshot from 2022-07-05 14-43-06

Second, I used the ‘Disk Usage Analyzer’ to show the disk summary:

The image below shows a total of 14G with over 2G of free space, but when I click the root folder ‘/’, it shows the summary for ‘/’:

So the root folder ‘/’ is only 10.9G, and the other 2-3G is not available.

Last, I installed filelight and opened it as root:

Here we can also see that the root folder ‘/’ uses only 90% of the whole disk and over 2G is free space. I then had it show the root folder ‘/’:

Based on the results from these 2 tools, I'm sure the root folder ‘/’ uses only 90% (11G) of the whole disk and 10% (2G~3G) is not available. I also tested another NVIDIA Nano board of the same series (same Ubuntu, same kernel), and it does not have this problem.

I suspect that the “missing” space is not missing, and that what you are seeing is due to inodes filling up. If you have a large number of small files, then you will reach 100% capacity before the content stored reaches maximum. Filling up inodes will reduce available space to 0% even if content stored is far below 100%. Most of the commands you see, such as du or df or ls, have the ability to display based on inodes. Can you see if the same conundrum exists if you find space occupied by inodes?
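
One way to see where inode usage concentrates (GNU du on Ubuntu 18.04 should support --inodes, but treat that as an assumption for your exact image) is to count inodes per top-level directory:
# count files/directories (inodes) used under each top-level directory, largest last
sudo du --inodes -x -d 1 / 2>/dev/null | sort -n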

Btw, this is not a bug unless the filesystem is corrupt. Even if corrupt it would be difficult to cause space to be “wrong”. This also would not be a hardware issue. Sometimes people have to “tune” their filesystem by changing block size if the default is not efficient.

So you think the inodes are filled up, or that there is a large number of small files. I don't think there are that many small files, but I will check later.
For now, can you give me advice on how to find the real reason, or how to recover the 2G of space? Thanks.

An excerpt from your previous reply:

If you run “df” without “-i”, then it reports based on bytes of usage instead of inode. If there is no difference (or approximately equal size) between the percentages via inode and bytes, then you have a perfectly tuned block size. If the two percentages differ, then the implication is that you have either too large of a block size, or too small of a block size. However, your fdisk (you should use gdisk for GPT, not fdisk) shows your block size is 512 bytes, and this is both the default and the smallest size, so you can only create a larger block size. Looking at what “df -i” reported it seems like there is no issue of running out of inodes, but you probably have a large number of inodes compared to what is required for that number of bytes. I say “probably” because I don’t see the “df” output for mmcblk0p1 without the “-i”. It would be useful to see the result of “df -H -T /” side-by-side compared to “df -H -T -i /” (once with “-i”, and once without “-i”).
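
In other words, the side-by-side comparison I'm asking for is simply:
# usage measured in bytes
df -H -T /
# usage measured in inodes
df -H -T -i /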

Remember that a certain number of inodes will exist regardless of whether or not they are used. More inodes means less actual storage space since these are “overhead”. As an example, if you have some sort of filesystem which uses 1 GB of space for inodes, and if inodes were 25% of capacity when space used is 100%, then it would mean you have four times too many inodes. From a different point of view, you could in the example have changed the inode size from 512 bytes to 2048 bytes to “tune” the filesystem and not waste inodes. Yet another view is to say that the average file size before optimizing is 2048 bytes, but if the average size had been 512 bytes, then there would be no waste. Since part of an inode is metadata, and part is actual storage, having 1/4 of the inodes would reduce the overhead to 1/4 of the amount used for 512 byte block size. I don’t have a side-by-side comparison of “df” with and without “-i” so I don’t know specifics for your case, but tuning by reinstalling the filesystem with a larger block size would save some waste.

Yet another way of saying this is that any time the number of bytes you store in a file does not exactly match the block size (after metadata) you will find that storing that number of bytes consumes more space from the total pool of available storage than the actual file consumes. More files which are not a close match to the block size implies total consumed space will exceed file size totals multiplied by the number of files.

I do not know where your specific space consumption is used at, but those two large files have me curious as to what they are. Also, because these files are so large, it implies you might want to change the block size from 512 bytes to something larger, e.g., 4096 bytes (but to reiterate, you do not have the “-i” and “not -i” version of “df /” side-by-side to compare).

What you are looking at is likely not a bug. Finding large files is about the only way I know of to actually see where space can be recovered (but I’m guessing you don’t want to delete the earlier mentioned files “data_log” and “data_record”, which is where most of this space is…do you really need 3 GB of log?). Changing the block size is destructive, and if you wanted to try other sizes, then basically what you’d need to do (if you don’t want to lose the current filesystem) is to clone, create a copy of this clone in a rootfs file which is formatted to the new block size, and then flash the altered copy.
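
As a very rough outline of that clone-and-reformat route (the image names, size, and mount points below are placeholders, and the final flashing step depends on your exact setup, so treat this as a sketch rather than a recipe):
# create an empty image and format it with a 4096-byte block size
truncate -s 14G rootfs_4k.img
mkfs.ext4 -b 4096 rootfs_4k.img
# loopback-mount the original clone (read-only) and the new image, then copy content across
sudo mkdir -p /mnt/old /mnt/new
sudo mount -o loop,ro original_clone.img /mnt/old
sudo mount -o loop rootfs_4k.img /mnt/new
sudo rsync -aAX /mnt/old/ /mnt/new/
sudo umount /mnt/old /mnt/new
# rootfs_4k.img would then be flashed in place of the original system image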

Keep in mind that if you increase block size, then larger files benefit, but any file significantly less than the new byte size will waste more space. It is a balancing game unless the filesystem is corrupt (if corrupt to that extent you’d need to reinstall).

Thanks for your reply.
I tested ‘df -i -h -T’ on the problem device and on a normal device, and got the same result:

cidi@cidi:~$ df -i -h -T
Filesystem     Type     Inodes IUsed IFree IUse% Mounted on
/dev/mmcblk0p1 ext4       896K  238K  659K   27% /

The inode usage is 27% on both, so that is not the issue.

Second, as you suggested, here is the difference between df with and without ‘-i’:
Screenshot from 2022-07-11 10-10-21

Now I have 2 things to try:

  1. Remove ‘data_log’ and ‘data_record’, then check the space usage again.
  2. Change the inode size from 512 bytes to 2048 bytes.

I plan to try them one at a time; what do you think?

It seems correct that removing the large files would resolve most of the space issue. It is also correct that if the large files remain, then changing the inode size from 512 bytes to 2048 bytes would be of benefit. Regarding the latter, the metadata overhead would decrease by 4-to-1 on those large files (but the actual result might not be a 100% improvement because files smaller than 512 bytes would also waste space). Should those files be removed, then there isn’t much reason to change the block size to a larger value, but if you intend to keep larger files, or if your smaller files are typically at least 512 bytes, then such a change would be useful. I would definitely see if you can delete those files first and live without them, but assuming they are part of some useful program, then you may instead want to add more storage mounted on the location where those files exist (e.g., extra storage mounted on “/home” is simple enough if you have the hardware). Changing inode size would be the last step unless experimenting shows this to be useful.
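
For example (assuming a hypothetical USB drive or SD card that shows up as /dev/sda1; the device name and filesystem are assumptions), the extra storage could be mounted where the large files live and made persistent with an fstab entry:
# format the extra device and mount it over the data directory
sudo mkfs.ext4 /dev/sda1
# (move any existing contents of /home/data onto the new device first)
sudo mount /dev/sda1 /home/data
# make the mount persistent across reboots
echo '/dev/sda1  /home/data  ext4  defaults  0  2' | sudo tee -a /etc/fstab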

Btw, if you were to implement external storage, then that storage could have a larger inode size, e.g., it could even be 4096 bytes, and the main system storage could remain at 512 bytes.

I removed these 2 big data files, but 2G is still missing:
Screenshot from 2022-07-12 17-47-23
What should I do now?