2 GB of partition space is lost

Hello, we have over 20 Nano devices and recently found that about 2 GB of partition space has gone missing.

As the attached df screenshot shows, the files only add up to about 11 GB, but usage is reported as 13 GB, close to full. This has now happened on two different devices, so I checked dmesg and found the following:

[ 1.920693] EXT4-fs (mmcblk0p1): warning: mounting fs with errors, running e2fsck is recommended
[ 1.922223] EXT4-fs (mmcblk0p1): recovery complete
[ 1.922230] EXT4-fs (mmcblk0p1): mounted filesystem with ordered data mode. Opts: (null)

Device basic info:
JetPack 4.4
Ubuntu 18.04
kernel 4.9.140-tegra

Has anyone else run into the same problem?

I couldn’t tell you where the space went, but the errors are from an improper shutdown. Had it locked up so that the power had to be pulled? Or has power been removed while it was running? The particular error seemed to be within the journal limits, and so although you might have lost some files being written at the time of power removal (there is almost always some sort of temp file being written), it is unlikely to be significant in size unless you were purposely writing a very large file. How do you shut these down?

Most of the NVIDIA development content is in “/usr/local”, so if this holds a large amount of content, then I’d say this is more or less “normal”. What do you see from:
sudo du -h -s /usr/local

Hi, thanks for your reply,

$ sudo du -h -s /usr/local
221M /usr/local

So can you give me some advice on how to avoid this problem?

There is more than one problem, and I think the improper shutdown is unrelated to the space consumption. For improper shutdown the fix is to always shut down normally without cutting power (and sometimes shutdown might be misleading if it is only “sleep” instead of actual shutdown…sleep simply goes to RAM and RAM is lost when power is lost, so be sure this is not “sleep” or “hibernate”).
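
If you do want to follow up on the “running e2fsck is recommended” message, the check has to run against an unmounted copy of the filesystem, never the live rootfs. A minimal sketch, assuming you have a raw clone image of mmcblk0p1 on your host PC (“clone.img.raw” is only a placeholder name):

# Hypothetical host-PC check of a raw rootfs clone; never run e2fsck
# against the mounted, running rootfs on the Jetson itself.
sudo losetup --find --show clone.img.raw   # prints e.g. /dev/loop0
sudo e2fsck -f /dev/loop0                  # use whichever loop device was printed
sudo losetup -d /dev/loop0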

One way or another, you might need to spend significant time searching for what uses the space. I would start by cleaning out unused cache files the package manager keeps:
sudo apt-get clean
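
It may be worth checking how much the cache actually holds before and after cleaning; a sketch:

# Size of the apt package cache before cleaning:
du -sh /var/cache/apt/archives
sudo apt-get clean
# Optionally also drop packages that were only installed as dependencies:
sudo apt-get autoremove
du -sh /var/cache/apt/archives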

221 MB is not enough to make a big difference by itself. There is an application called “filelight” you might want to use to find the content consumption, but I must warn you that the application itself is huge and will consume a lot of space. Then find out where the space is distributed.

You can clone the rootfs and use filelight on the loopback mounted clone from your host PC. Having this on the host PC makes searching much easier.
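
A sketch of that, assuming you already have a raw clone image on the host (“clone.img.raw” is a placeholder for whatever your clone step produced):

# Loop-mount the raw rootfs clone read-only on the host PC and point
# filelight (or du) at it; nothing on the Jetson itself is touched.
sudo mkdir -p /mnt/clone
sudo mount -o loop,ro clone.img.raw /mnt/clone
filelight /mnt/clone
sudo umount /mnt/clone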

There are also tools like “find” which can find files or directories of a given size or larger, but this is hard to use unless you already know the subdirectory to start in for searching.
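
For example, to list everything over 100 MB on the rootfs only (a sketch; “-xdev” keeps find from wandering into /proc, /sys, and other mounts):

# Files larger than 100 MB on the rootfs, largest last.
sudo find / -xdev -type f -size +100M -exec du -h {} + 2>/dev/null | sort -h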

Thanks, but filelight only shows which files are big; what I actually want to know is why the 2 GB of space is lost.

I read that files which were deleted but not yet released can still hold space, so I ran the command below to look for them, but found nothing.

lsof | grep -i delete | sort -nrk7 | head | awk 'BEGIN{print "file-size","PID","system"}{print $7/1024/1024"M",$2,$9}' | column -t

file-size PID system
64M 6747 /memfd:pulseaudio
0.0001297M 11293 3
0.0001297M 11293 3
0.0001297M 11293 3
0.0001297M 11293 3
0.0001297M 11293 3
0.0001297M 11293 3
0.0001297M 11293 3
0.0001297M 11293 3
0.0001297M 11293 3

I think the files you found from pulseaudio, combined, are probably much smaller than 2GB. Also, they are likely temporary and used while processing media, like audio from a web browser.
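
If you want to double check that, lsof has its own filter for deleted-but-still-open files; a sketch (the total is only approximate, since the SIZE/OFF column is not always a plain file size):

# "+L1" selects open files whose link count is below 1, i.e. files
# that were deleted but are still held open by some process.
sudo lsof -nP +L1
# Rough total of the SIZE/OFF column, in MB:
sudo lsof -nP +L1 | awk 'NR>1 {sum+=$7} END {printf "%.1f MB\n", sum/1024/1024}'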

The reason for filelight is to find out where the biggest consumption of space is at, visually, via pie charts. I don’t know where the 2GB is at, but if you “drill down” to directories which seem to consume a lot, then you might find what you are looking for. Do beware you shouldn’t just go around deleting files since you might delete something important.

One example of something to not delete is a swap file. Often this is just a “zram” file, and not a “real” file (it is in RAM and pretending to be a block device). If you run the command “sudo swapon -s”, do you see anything other than zram?

You basically have a mystery, and unless you catch the particular moment when usage jumps by 2GB and check the logs, there isn’t much you can do to directly find out where this is at. I do want to emphasize though that there is a significant possibility that it isn’t something you can just delete without consequences.
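
One way to catch it in the act is to log rootfs usage periodically and then match the timestamp of the jump against your application and system logs; a rough sketch (the log file name is arbitrary):

# Append a timestamped rootfs usage line once a minute; when usage
# jumps by ~2 GB, compare the timestamp against other logs.
while true; do
    echo "$(date '+%F %T') $(df -h --output=used,avail / | tail -1)" >> "$HOME/rootfs-usage.log"
    sleep 60
done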

When I did ‘sudo swapon -s’, show below result:
swapon -s
Filename Type Size Used Priority
/dev/zram0 partition 507416 0 5
/dev/zram1 partition 507416 0 5
/dev/zram2 partition 507416 0 5
/dev/zram3 partition 507416 0 5

and when I ran the same command on a normal device, I got the same result.

The reason I want to know where the 2 GB went: when I use df and du, the space matches and nothing appears lost.

In addition, I found some messages in dmesg:

[ 4.980496] zram: Added device: zram0
[ 4.987133] zram: Added device: zram1
[ 4.988322] zram: Added device: zram2
[ 4.997019] zram: Added device: zram3
[ 5.035760] zram0: detected capacity change from 0 to 519598080
[ 5.054019] Adding 507416k swap on /dev/zram0. Priority:5 extents:1 across:507416k SS
[ 5.062093] zram1: detected capacity change from 0 to 519598080
[ 5.084607] Adding 507416k swap on /dev/zram1. Priority:5 extents:1 across:507416k SS
[ 5.104261] zram2: detected capacity change from 0 to 519598080
[ 5.117954] Adding 507416k swap on /dev/zram2. Priority:5 extents:1 across:507416k SS
[ 5.128629] zram3: detected capacity change from 0 to 519598080
[ 5.141781] Adding 507416k swap on /dev/zram3. Priority:5 extents:1 across:507416k SS

[ 316.522889] EXT4-fs (mmcblk0p1): error count since last fsck: 1150
[ 316.529482] EXT4-fs (mmcblk0p1): initial error at time 1648534020: ext4_orphan_get:1184
[ 316.537852] EXT4-fs (mmcblk0p1): last error at time 1655777941: ext4_mb_generate_buddy:759

I will check the kernel source for the EXT4-fs messages above, and I hope this is helpful.

This confirms the space is not consumed by swap. The zram is just literally RAM, but allocated to pretend it is disk space. I had hoped it might count for used space, but it does not.

The error count on ext4 though is an issue, although not necessarily one of excess used space. Apparently the system was shut down incorrectly, and the ext4 journal stopped some corruption, but the need to remove an “orphan” node tends to say there may have been some file which had unrecoverable problems and might now be missing (however, once again, a missing file won’t account for space used).

There are basically two choices of issues: A lot of small things consuming space and adding up to a lot of space, or a few large things all by themselves. I had kind of hoped you could go through with filelight (or any other method) and find out where the space is consumed (but it only matters if it is abnormal use of space…the operating system itself is expected to have some large content).

One possibility I had not thought of before is that inefficient space use can be a problem. To that extent “inodes” can run out. What do you see from:
df -i -h -T
(every file has at least one inode, and as the file grows, a chain of inodes is used for a particular block of bytes…the reason a disk is called a “block device”…running out of inodes can fill a device even if very few bytes are stored)
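
If the inode count does turn out to be high, a rough way to see where the file count (rather than the byte count) concentrates is something like:

# Count regular files per top-level directory, staying on the rootfs
# only ("-xdev" skips /proc, /sys, tmpfs, and any loop mounts).
sudo find / -xdev -type f | awk -F/ '{print "/"$2}' | sort | uniq -c | sort -nr | head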

@linuxdev
Hi, I did some tests based on your suggestion and found the 10 biggest files; please check them, thanks.

cidi@cidi:~$ df -i -h -T
Filesystem Type Inodes IUsed IFree IUse% Mounted on
/dev/mmcblk0p1 ext4 896K 239K 658K 27% /
none devtmpfs 436K 706 436K 1% /dev
tmpfs tmpfs 496K 14 496K 1% /dev/shm
tmpfs tmpfs 496K 3.4K 493K 1% /run
tmpfs tmpfs 496K 6 496K 1% /run/lock
tmpfs tmpfs 496K 18 496K 1% /sys/fs/cgroup
tmpfs tmpfs 496K 23 496K 1% /run/user/120
tmpfs tmpfs 496K 12 496K 1% /run/user/1000
cidi@cidi:~$ sudo su
[sudo] password for cidi:
root@cidi:/home/cidi# find / -type f -printf '%s %p\n' 2>&1 \
  | grep -v 'Permission denied' \
  | sort -nr \
  | head -10

3221225472 /home/data/data_log
2147483648 /home/data/data_record
92348298 /usr/local/lib/libavcodec.a
78168856 /home/cidi/.vscode-server/bin/dfd34e8260c270da74b5c2d86d61aee4b6d56977/node
61753688 /usr/bin/dockerd
61028160 /usr/lib/aarch64-linux-gnu/libLLVM-9.so.1
54568960 /opt/ota_package/kernel-extmod-ubuntu18.04_aarch64.tar
53780480 /opt/ota_package/kernel-extmod-linux_x86_64.tar
52516168 /usr/bin/docker
48741256 /var/lib/apt/lists/ports.ubuntu.com_ubuntu-ports_dists_bionic_universe_binary-arm64_Packages
root@cidi:/home/cidi#
root@cidi:/home/cidi#
root@cidi:/home/cidi# du -sh /home/
4.4G /home/
root@cidi:/home/cidi# for i i /*; do du -sh $i; done
bash: syntax error near unexpected token `i'
root@cidi:/home/cidi# for i in /*; do du -sh $i; done
11M /bin
60M /boot
4.0K /dev
13M /etc
4.4G /home
434M /lib
0 /log.txt
16K /lost+found
32K /media
4.0K /mnt
217M /opt
0 /proc
4.0K /README.txt
77M /root
20M /run
11M /sbin
8.0K /snap
4.0K /srv
0 /sys
7.5M /tmp
4.7G /usr
442M /var
root@cidi:/home/cidi# sudo du -h -d 1 /
434M /lib
11M /sbin
0 /sys
4.0K /mnt
4.0K /srv
11M /bin
32K /media
16K /lost+found
4.4G /home
7.5M /tmp
60M /boot
8.0K /snap
217M /opt
77M /root
4.7G /usr
13M /etc
20M /run
4.0K /dev
442M /var
0 /proc
11G /

Note that “df” output for any filesystem type other than “ext4” is just in RAM. Actual partition space, in this case, is only the line mentioning “/dev/mmcblk0p1”.

Yes, these two are very interesting consumers of space:

3221225472 /home/data/data_log
2147483648 /home/data/data_record

The first is exactly 3 GiB and the second exactly 2 GiB, about 5 GiB combined.
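
Those byte counts from find work out to exactly 3 GiB and 2 GiB, e.g. via coreutils numfmt:

# Convert the sizes reported by find (bytes) to binary units.
numfmt --to=iec 3221225472 2147483648
# 3.0G
# 2.0G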

I did count these 2 big files, and the total is still 11G (while 13G is actually used). I should explain these files: I used the commands below to create 2 disk image files and then loop-mount them:

# sudo truncate -s 3G /home/data/data_log
# sudo truncate -s 2G /home/data/data_record
# mount -o loop /home/data/data_log /mnt/log
# mount -o loop /home/data/data_record /mnt/record

To be clear, I counted the space after /mnt/log and /mnt/record had been unmounted.
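
(Since “truncate -s” creates sparse files, one thing worth comparing is the apparent size against the blocks actually allocated on mmcblk0p1; a sketch using the same paths as above:)

# Apparent size vs. blocks actually allocated; for a sparse file made
# with "truncate -s" these two numbers can differ a lot (the first
# column of "ls -ls" is the allocated size).
ls -lhs /home/data/data_log /home/data/data_record
du -h --apparent-size /home/data/data_log /home/data/data_record
du -h /home/data/data_log /home/data/data_record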

In addition, after everything we have discussed here, it seems the real reason is still not clear, right?

There are many tools to find or view sizes, but unfortunately, only a few actually provide a good “picture” of what consumes space in a significant way. For example, your data_log and data_record present large sizes which consume from partition mmcblk0p1 (which is rootfs). This is why I recommended filelight: This allows a “pie chart” of directory content, and you can click on this to drill down; if a directory is small, there is no need to drill down, and if the directory is large, then you can drill down and visually see where the space is going. You could list every file and directory, put them in a spreadsheet, and sort, but although this is accurate, it is also an impractical pain to do so. Filelight itself requires a lot of space, but I still recommend it as a temporary tool.

I’m not sure what those “/home/data” files are, nor why you can truncate and mount them (most loopback filesystems will corrupt if you truncate without a filesystem-aware tool, except if the filesystem is contained in only part of the file). However, assuming you can mount them in a subdirectory of “/mnt”, the files themselves still consume from rootfs despite being presented as a new filesystem under “/mnt”. Mount or umount of those files will have no effect on rootfs other than making the content visible in a different way at a second location. Those files are enormous.

Also, be careful that some tools which summarize consider a kb as 1000 bytes, and a KiB as 1024 bytes (or GB as 1000x1000x1000, but GiB as 1024x1024x1024). Be certain that whatever app you use to list file sizes is using the same notation as the one you list either 11 GB or 13 GB from.
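
As a worked example of how much that alone shifts the numbers, 13 decimal gigabytes is only about 12.1 GiB; a quick check with bc:

# 13 GB (decimal, 10^9 bytes) expressed in GiB (2^30 bytes):
echo "scale=2; 13 * 10^9 / 1024^3" | bc
# 12.10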

This might be going off on a tangent, but also beware that the ext4 filesystem is a tree-based system with “nodes” that link together. Those nodes are what you see as “inodes” in the “df -i -h -T” command. Given an inode of a particular size, expect the actual space consumed by a file to be the sum total of the inodes needed to contain that file’s content (there is metadata in each node, plus some actual file content). If a file requires 1000 inodes to contain it, then the file consumes the space of 1000 inodes despite the file content actually being some subset of that 1000-inode size (metadata consumes space too).

Think about when a filesystem is formatted: It probably has a block size of 512 bytes (I’m somewhat using “block size” and “inode size” interchangeably…on a tree based filesystem they are mostly the same thing), but it is possible to use a larger block size during format, e.g., 4096 bytes; with 512 bytes the smallest space a 1 byte file can consume is 512 bytes, and with 4096 bytes block size, the smallest amount of space a 1 byte file can consume is 4096 bytes. However, a file which is much larger than 4096 bytes will be more efficiently stored with a 4096 byte inode compared to a 512 byte inode since fewer inodes are required (each inode has the same amount of metadata required regardless of the inode size, leaving more room for actual data). If you have a lot of small files, perhaps 1 byte each, then 512 byte inode size is more efficient than 4096 byte inode size. I mention this because I’m wondering if you are comparing disk space consumption directly to file sizes…this won’t work out because of metadata in each block/inode.
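
A quick way to see what the rootfs was actually formatted with, and the effect on a tiny file, is something like:

# Report the block size and inode size the ext4 rootfs was formatted with.
sudo tune2fs -l /dev/mmcblk0p1 | grep -iE 'block size|inode size'

# A 1-byte file still occupies a whole block on disk; the first column
# of "ls -ls" shows the allocated size (typically 4.0K for a 4096-byte block).
echo -n x > /tmp/oneb
ls -lhs /tmp/oneb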