There are many tools to find or view sizes, but unfortunately, only a few actually provide a good “picture” of what consumes space in a significant way. For example, your data_log and data_record are large, and they consume space from partition mmcblk0p1 (which is rootfs). This is why I recommended filelight: it shows a “pie chart” of directory content which you can click to drill down; if a directory is small, there is no need to drill down, and if the directory is large, then you can drill down and visually see where the space is going. You could list every file and directory, put them in a spreadsheet, and sort, but although this is accurate, it is also an impractical pain to do. Filelight itself requires a lot of space, but I still recommend it as a temporary tool.
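If filelight’s footprint is a problem, a rough command-line approximation (a minimal sketch, assuming GNU coreutils; the paths are just examples) is du plus sort:

```
# Summarize space per top-level directory without crossing into other
# mounted filesystems (-x), sorted so the biggest consumers come last.
du -xh --max-depth=1 / 2>/dev/null | sort -h

# Then drill down into whichever directory dominates, e.g.:
du -xh --max-depth=1 /home 2>/dev/null | sort -h
```

It isn’t as convenient as clicking through a pie chart, but it needs no extra install.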
I’m not sure what those “/home/data” files are, nor why you can truncate and mount them (most loopback filesystems will corrupt if you truncate them without a filesystem-aware tool, unless the filesystem is contained in only part of the file). However, assuming you can mount them in a subdirectory of “/mnt”, the files themselves still consume space from rootfs even though their content appears as a separate filesystem under “/mnt”. Mounting or umounting those files has no effect on rootfs other than making the content visible in a different way at a second location. Those files are enormous.
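As a sketch of what I mean (the image file name here is hypothetical), a loop-mounted file shows up as its own filesystem, but the backing file still consumes rootfs:

```
# The backing file occupies rootfs space regardless of mount state:
ls -lh /home/data/example.img

# Loop-mount it; its content becomes visible under /mnt/example, but
# rootfs usage reported by df does not change from the mount itself.
mkdir -p /mnt/example
mount -o loop /home/data/example.img /mnt/example
df -h -T / /mnt/example
umount /mnt/example
```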
Also, be careful that some tools which summarize sizes consider a kB to be 1000 bytes and a KiB to be 1024 bytes (or a GB to be 1000x1000x1000 bytes, but a GiB to be 1024x1024x1024 bytes). Be certain that whatever app you use to list file sizes is using the same notation as the tool that reported the 11 GB or 13 GB figure.
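With GNU coreutils you can see both notations from df itself; this is just a quick sanity check:

```
df -h /    # powers of 1024: “G” here means GiB
df -H /    # powers of 1000: “G” here means GB
```

A usage of 11 GiB prints as roughly 11G with -h, but closer to 12G with -H; that is exactly the kind of mismatch to watch for.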
This might be going off on a tangent, but also beware that the ext4 filesystem is a tree-based system with “nodes” that link together. Those nodes are what you see as “inodes” in the “df -i -h -T” command. The space a file actually consumes is the sum of all of the allocation units needed to hold its content plus the metadata that tracks them, so if a file requires 1000 such units, then the file consumes the space of all 1000 units even though the file content itself is only some subset of that size (metadata consumes space too).
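You can see the difference between file length and allocated space per file with GNU stat and du (the path is hypothetical):

```
# Apparent size in bytes vs. the 512-byte units actually allocated:
stat -c 'size=%s bytes, allocated=%b blocks of %B bytes' /home/data/example.img

# du reports allocated space by default; --apparent-size reports file length:
du -h /home/data/example.img
du -h --apparent-size /home/data/example.img

# Filesystem-wide inode usage, as mentioned above:
df -i -h -T
```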
Think about when a filesystem is formatted: it probably has a block size of 512 bytes (I’m somewhat using “block size” and “inode size” interchangeably here; on a tree-based filesystem they are closely related), but it is possible to use a larger block size during format, e.g., 4096 bytes. With a 512 byte block size, the smallest space a 1 byte file can consume is 512 bytes; with a 4096 byte block size, the smallest space a 1 byte file can consume is 4096 bytes. However, a file which is much larger than 4096 bytes will be stored more efficiently with 4096 byte blocks than with 512 byte blocks, since fewer allocation units are required (each unit carries the same amount of metadata regardless of its size, leaving more room for actual data). If you have a lot of small files, perhaps 1 byte each, then a 512 byte block size is more efficient than a 4096 byte block size. I mention this because I’m wondering if you are comparing disk space consumption directly to file sizes; this won’t work out because of the metadata in each block/inode and the rounding up to whole blocks.
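A minimal demonstration of that rounding, assuming a filesystem formatted with 4096 byte blocks:

```
# A 1-byte file still consumes one full block:
printf 'x' > tinyfile
stat -c 'size=%s bytes, allocated=%b blocks of %B bytes' tinyfile
du -h tinyfile    # shows 4.0K on a 4096-byte-block filesystem

# Block size is chosen at format time; DESTRUCTIVE, device is a placeholder:
# mkfs.ext4 -b 4096 /dev/sdX1
```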