deleting files does not free up space

I booted up my NVIDIA Jetson board today and found that the OpenCV library was missing. My main storage was also full, but no matter how many files I deleted and emptied from the trash, the amount of free space did not go up.

Anyone have any fixes?

Which version of L4T? Try:

head -n 1 /etc/nv_tegra_release

To avoid complications of X11, use either a serial console or a text terminal of some sort and see what the result is of:

df -h /

ubuntu@tegra-ubuntu:~$ head -n 1 /etc/nv_tegra_release

R21 (release), REVISION: 2.0, GCID: 4814984, BOARD: ardbeg, EABI: hard, DATE: Mon Dec 1 22:48:21 UTC 2014

ubuntu@tegra-ubuntu:~$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        14G  6.8G  6.2G  53% /

OK, I fixed it… I'm not sure whether the OpenCV lib was deleted because of the following; correct me if I'm wrong.

I apparently ran out of inodes… which I had no idea even existed until I used my Jetson TK1 the way I did. I had a lot of very small images for Haar cascade training: 100x100 pixels, 30-80 KB each, all stored on the Jetson's 16 GB of main storage.
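As a quick sanity check in situations like this, counting the files in the training directory gives a rough idea of how many inodes the image set consumes, since every file occupies at least one inode (the path below is hypothetical, not from the original post):

```shell
# Count regular files under the (hypothetical) training-image directory;
# each one uses at least one inode on the filesystem.
find ~/haar_training_images -type f | wc -l
```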

So I basically ran out of inodes before I ran out of actual disk space and that is what caused the problem?

I’m still fairly new to Linux, but when I googled my problem and asked around on IRC, the general consensus was that I had run out of inodes.
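For anyone hitting this later: df can report inode usage directly with its -i flag, which would show the exhaustion even while df -h still reports free space:

```shell
# Report inode usage rather than block usage for the root filesystem.
# An IUse% of 100% means new files cannot be created even though
# df -h shows free disk space.
df -i /
```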

inodes are part of the overhead of a file system. Other file systems may structure it differently, but the essence is that you can’t find files, directories, or other essential parts of the file system without some way of identifying them (which uses up space).

Most file systems can be tuned towards “lots of little files” or towards “a few huge files” during creation of the file system…a setting of how many bytes of disk space get one inode. Every file needs at least one inode no matter how small it is, so a filesystem holding huge numbers of tiny files can exhaust its inode pool long before the data blocks run out; conversely, creating far more inodes than will ever be used wastes space as metadata overhead.

For ext4, see the “-i” (bytes-per-inode) option of “mkfs.ext4”. The flash program runs mkfs on a loopback-mountable image, which is then flashed over to the Jetson…so its mkfs command could be modified with a smaller “-i” value to create more inodes for a larger number of smaller files. This makes sense when you know ahead of time that you’re going to have large numbers of small files.
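To make the effect of the bytes-per-inode ratio concrete, here is a small standalone demonstration on a scratch image file (demo.img is just a throwaway file, not the Jetson flash image; -F lets mkfs format a regular file without a prompt):

```shell
# Build a 16 MB scratch image and format it with one inode per 4 KB
# of space, which yields far more inodes than the default ratio.
dd if=/dev/zero of=demo.img bs=1M count=16
mkfs.ext4 -q -F -i 4096 demo.img
# Inspect how many inodes were actually created.
tune2fs -l demo.img | grep 'Inode count'
rm demo.img
```

Repeating the same steps with the default ratio (no -i) and comparing the two “Inode count” values shows the trade-off directly.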

Try removing .nv under /home/nvidia.