Can I delete the .cache folder on my Jetson Nano to increase available space?

Hello. I need more space on my Jetson Nano. Can I delete the .cache folder, or will this cause problems in the system?


I think technically you want the folder itself to exist, and programs will save cache there over time. Not every program that could use it necessarily does, but the idea is to speed up certain operations, and a missing cache would slow them down. Then, as you use the applications which cache, it will fill back up. The biggest users of this space are almost always web browsers. If you have Firefox (Mozilla) or Chromium, then they will store web page cache there. Losing this means they will reload certain content from the internet instead of having that information immediately available.

The program “filelight” is useful for examining what consumes space inside a directory, though filelight itself is rather large. If you were to install it (“sudo apt-get install filelight”), you could get an idea of the proportions via:
filelight ~/.cache

If you go this route, beware that you’ll have to keep doing this over and over as you use applications. Some applications, especially web browsers, offer a setting for how much cache they are allowed to use. If you first reduce that limit for the biggest users (how to work with cache settings is specific to each program…some have no settings at all), then you wouldn’t be constantly removing the content (the trade-off is that the program would probably use the network more).
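If you’d rather not install anything, a quick command line way to see which applications are the biggest cache users (a minimal sketch using tools already on the system):

# Size of each directory directly under ~/.cache, sorted smallest to largest:
du -h --max-depth=1 ~/.cache | sort -h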

If you were to choose to do this, then perhaps first you could test by moving the content. If nothing is locking a file at the time, then you’d try something like this:

# Backup (a sort of temporary removal which can be reversed):
cd
mv -i ./.cache ./.old_cache
mkdir ./.cache

# If something went wrong, remove the replacement and put the original back:
cd
rm -r ./.cache
mv -f ./.old_cache ./.cache

(you could use cp instead of mv for restore; you could also use rsync)
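For example, an rsync restore of the renamed copy (a sketch; it assumes the backup still sits in ~/.old_cache) could be:

# Copy the saved cache back in place, preserving permissions and timestamps:
rsync -a ~/.old_cache/ ~/.cache/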

My bet is that this is only a temporary solution, but quite useful if the system won’t boot due to not having enough disk space.


Thanks for your reply.
I need additional space to docker save my container onto the external disk. Unfortunately, I can’t import it directly onto the external disk, so I need 2 GB more, because I don’t have enough space to save it on my device.
It is OK for me if deleting .cache cleans my web browser data or even breaks the browser, as long as it won’t corrupt my containers.
After extracting the docker containers I will use a backup anyway.

Is your external disk always present? If so, then you could simply mount a copy of “/home” on that disk to the “/home” mount point.

Yes, it is. But I am not sure if there is enough space on the external disk.

Is all of your docker space going into your home directory? From the Jetson, what do you see from:
df -H -T /home

How large is the external media? How much is needed?

Sorry for the late reply. The external media is 8 GB. This will be enough for one of my containers (4.3 GB), but not for the whole home directory. However, if I could docker save my container directly onto the external drive, that would be great.

(But, unfortunately, I can’t, because of a permission error: How to docker save 4gb container if i have only 1 gb left/How to docker save container directly on the external drive - #3 by AastaLLL)

The docker save command tries to create the tar file in the home directory, but fails. The swapfile and .cache were deleted to make room for the saved container, so it should work, but for some reason it doesn’t:

As for the command df -H -T /home:


The available free space is different from the available free space in the picture above, because the last picture was taken after the backup (without deleting the swapfile (2 GB) and .cache (2.6 GB)).

I’m kind of at a disadvantage here because I don’t use docker (I know I should, but I’ve not had a reason so far other than curiosity). I do see though that your home has only 1.2 GB left, so there is a good chance you could completely fill things up and then need to recover. Is that 1.2 GB already containing the docker content? Or do you need to add docker still and have only 1.2 GB?

What is the size of your external media? If the media is mounted somewhere, then you could cd to that location and run “df -H -T .” (the trailing “.” is important). If the external media is large enough, then you could actually back up your home to that media, and mount it on top of the old home. This would only temporarily hide the actual home, but if the media already has an exact clone (via rsync or other methods), then you won’t notice anything different other than any changes going to the external media…and unmounting would result in the original system having no idea that home has changed, it’ll just run as it always has.

EDIT: Also, you can perform a tar of any location directly to a remote computer’s external media via ssh. The concept of backup or restore can be piped through ssh to another computer. Or you can mount the media directly and backup or restore directly. The point is that your home can be recreated on other media and made as a temporary mount, it isn’t just for restoring the original location.
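As a rough sketch of the ssh idea (the user name, host name, and destination path are placeholders, not anything on your system):

# Stream a compressed tar of /home to another computer over ssh:
sudo tar czf - -C / home | ssh someuser@otherhost 'cat > /media/external/home_backup.tar.gz'
# Restore it later by streaming it back (run from the Jetson):
ssh someuser@otherhost 'cat /media/external/home_backup.tar.gz' | sudo tar xzf - -C /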

Thanks for the reply. The size of my current external media is 8 GB, but I can probably also use my other 128 GB SD card as external media. If so, I actually can change the home directory location to the external drive. Unfortunately, I didn’t understand how to do that. Could you give me a step-by-step instruction/example/link to a guide about that, please?

SSH could also solve my problem, but, unfortunately, I don’t know how to use it either and need additional information.

Keep in mind that what I’m showing you is:

  • How to back up your /home to your external media.
  • How to temporarily mount this over your /home so you can use that space, and then later uncover the original home.
  • How to rsync in some way as to later on incrementally back up the same content without copying 100% of it.

First, you should know that your SD card or other external media needs to be formatted as ext4. Normally any external media comes formatted as either VFAT (FAT32) or exFAT. Once you reformat it as ext4, the partition is no longer readable in Windows, and you’ll need to use it in Linux unless you wipe out the content and format it again as either FAT32 or exFAT.

Before you begin, any location you might be interested in knowing the size taken from that directory or subdirectories thereof can be found by naming that directory in combination with “sudo du -h -s /where/ever/that/is”. For example, this would tell you the actual content consumed for /home and its subdirectories:
sudo du -h -s /home

To know the space available on a partition which owns a given location you can do something similar with “df -H -T” (I have extra options in there for naming partition type and more human units like G for GB):
sudo df -H -T /home
(this tells you about the device owning /home and the filesystem)


Note: Certain operations cannot be completed on a mounted partition. Automount will often mount temporary media types somewhere in /media or /var/run. Further down is information on “df -H -T /some/where”, but you can run just “df -H -T” and you will see every block device, and if it is mounted, where it is mounted at. You can “sudo umount /dev/somedevice” or “sudo umount /some/mount/point” to unmount (command umount).


The following tells you about many of the lower level details of a given disk or device (you don’t normally need this). I am using gdisk since it is intended to work with UEFI and not just old style BIOS partitions. fdisk does the same thing, but it is more legacy in that it is intended for BIOS partitions and any understanding of UEFI partitioning is an afterthought (gdisk was designed for UEFI systems from the start):
sudo gdisk -l /dev/sdb
(My example lists the entire disk /dev/sdb, but it could also be something like external media, or the Jetson’s eMMC on those models; for example, it could be /dev/mmcblk0p1 for eMMC or /dev/mmcblk1p1 for many SD card models of Jetsons; the name varies on desktop PCs)

On temporary media such as an SD card or USB disk or USB thumb drive you can monitor the system’s log continuously with “dmesg --follow”. While monitoring this you can plug in an extra external media, e.g., a thumb drive or an SD card which is not part of the system’s rootfs, and it will tell you which device it is. It is rather important to know exactly which device it is at each plug-in. The numbering and timing of plug-ins can change the naming, and so before you do anything “dangerous” you have to always know the exact device name to avoid destroying some other software.

In my example I will be assuming it is “/dev/mmcblk2”. The first partition of one of these devices is “/dev/mmcblk2p1”. Note that the mmcblk2 is the device, and the mmcblk2p1 is the first partition of that device. You have to adjust for your case, e.g., if a USB SATA device it might be “/dev/sdb” as the disk, with the first partition as “/dev/sdb1”. Never ever use the wrong device name or partition number.
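As a quick sanity check before doing anything destructive (a sketch; the device name matches the example used here):

# Show each partition of the device with its filesystem type, label, and mount point (if any):
lsblk -f /dev/mmcblk2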

To gather information, query the device as a whole, which will automatically tell you about its partitions (you could name a partition, but you want to know about the whole device). So for example you could create a log of what /dev/mmcblk2 contains:
sudo gdisk -l /dev/mmcblk2 2>&1 | tee log_gdisk.txt
(note that the “2>&1 | tee log_gdisk.txt” is the part which logs this for future use)

You can prepare to edit this device (which is destructive to the partition’s content):
sudo gdisk /dev/mmcblk2
(if you type “q” it quits; it will ask if you want to save changes or not, with “n” for no or “y” for yes)

You can then list the current disk (or SD card) the same as the “gdisk -l” does, but it won’t exit:
p (short for “print”)

Each partition has a type. Notice when you print the current setup there is a column called “Code”. For the usual Microsoft partition type it is 0700 (it is actual hexadecimal 0x700). The normal Linux filesystem type for ext4 is hex 8300.

Note that you can get a summary of commands in gdisk with “?”. This is what my gdisk help (?) shows:

Command (? for help): ?
b       back up GPT data to a file
c       change a partition's name
d       delete a partition
i       show detailed information on a partition
l       list known partition types
n       add a new partition
o       create a new empty GUID partition table (GPT)
p       print the partition table
q       quit without saving changes
r       recovery and transformation options (experts only)
s       sort partitions
t       change a partition's type code
v       verify disk
w       write table to disk and exit
x       extra functionality (experts only)
?       print this menu

The command you are interested in is “t” to change the partition type from 0700 to 8300. If your partition is the only partition, then the device will have only 1 under “Number” when you print (“p”) the disk information. Assuming partition 1 of device /dev/mmcblk2, meaning /dev/mmcblk2p1, you would use “t” to change type, you would name 1 for the partition, and you would name 8300 as the type. This does not save anything yet; gdisk just queues up the commands.
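Put together, the keystrokes inside “sudo gdisk /dev/mmcblk2” would look roughly like this (a sketch assuming partition 1; nothing is written until you later confirm with “w”):

t       change the partition's type code
1       partition number (gdisk selects it automatically if there is only one partition)
8300    the Linux filesystem type code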

If you have done this, then “p” will now show Code 8300. You can use “w” to write and quit, or just “q” to quit without saving (it might remind you there are unsaved changes and ask whether you really want to quit).

From outside of gdisk you could see stats via “sudo gdisk -l /dev/mmcblk2” and you’d see the 8300 Code.

Now you can format this. To format the first partition of device mmcblk2 (which is /dev/mmcblk2p1):
sudo mkfs.ext4 /dev/mmcblk2p1

This device is now able to understand Linux filesystem permissions, and thus it can now back up /home or even temporarily substitute by being mounted on /home.


To actually back up everything you will need to mount the partition. I will use the traditional mount point of “/mnt”. Keep in mind that when backing up “/home” the mount point should not be within “/home”. Assuming this is “/dev/mmcblk2p1”:
sudo mount /dev/mmcblk2p1 /mnt
(if this was auto mounted somewhere else you could use that mount point, but my illustration will use /mnt)

If we were copying only individual files, or copying across computers on a network, then I would not do this. However, copying “/home” implies entire directories. Since it is temporary media you are free to try this and fail and not care. You could just remove what you’ve saved to temporary media and start over (you would not need to partition or format again, that part is complete even if this fails).

Try this, which might take significant time…remember that we are copying the *content* of /home, and that subdirectories are the content, not individual files. This simplifies backup when it is “everything” and not incremental (incremental is faster):

cd /home
sudo cp -adpr * /mnt/
ls /mnt

Do all of the same subdirectories in “/home” now show up in “/mnt”? If so, you have a copy of your /home. If you were to mount this on “/home”, then your system would not know the difference, but content added to this would now go to the temporary media. If you were to reboot, then the media would not be mounted on /home, and everything would revert to the original home directory. You could even restore /home with this as a “snapshot”.
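If you want a slightly stronger check than eyeballing the listing (a sketch; both tools are already on the system), you can compare sizes and contents:

# Compare how much space the original and the copy consume:
sudo du -h -s /home /mnt
# List anything that differs (the lost+found directory created by mkfs is expected to show up):
sudo diff -r /home /mnt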

Keep in mind that I am not showing you how to permanently mount this, and that a reboot would restore everything as it used to be. Typically you must do this as root, and you must not be using /home during the mount. Note that there is a difference between dropping into a root shell with “sudo -s”, versus getting a full root login shell with “sudo -i”, and I am using the latter. Also, you want to do this on the command line without a GUI login…which means you can use ssh or serial console, or you can go to the local console with CTRL-ALT-F2 (or similar; the goal is that no user is logged in to the GUI).

I think this will work, but if it does not, let me know, we can use other methods:

cd /
sudo -i
mount /dev/mmcblk2p1 /home
ls /home

If it looks right and there was no issue, then you go back to the GUI and log in (e.g., ALT-F1 locally). During this session anything you do in “/home” goes to the new media and not the original disk. You could for example use more space if the temporary media has that much space. While logged in you could confirm your home directory is using this device:
df -H -T ~
(the “~” tilde is your home directory)

Reboot, and you have reverted to the old content.


rsync is a far more flexible way to back up and restore, and if more flexibility is needed, then I can add information on that. For example, rsync would allow you to back up /home to another computer over ssh, and that computer could mount the external media. If only a few files had changed, then rsync would only update those files instead of copying everything (a quite useful time saver over a slow network).
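For example, refreshing the backup incrementally could look like this (a sketch; the remote user, host, and path are placeholders):

# Refresh the copy on the mounted media, transferring only what changed
# (--delete keeps it an exact clone by removing files which no longer exist in /home):
sudo rsync -a --delete /home/ /mnt/
# Or push the same content to another computer over ssh:
sudo rsync -a --delete /home/ someuser@otherhost:/media/external/home_backup/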

Note that you are free to delete the content in “~/.cache/*” on your temporary media at any time. This will still build back up over time as you use applications which cache, but if you don’t use those applications on the temporary media, it won’t matter. The original media would still have the cache (which improves performance while taking disk space).

Basically, if this is not going to work for you, describe exactly what you are doing so that it can be adapted.

Thanks a lot for such a detailed response. I am trying to get 2-3 GB more so I can docker save an image with the results of my work (for now, unfortunately, I still couldn’t do that, because I don’t have enough space on my SD card and because I couldn’t docker save my image directly onto the external drive). Then I am going to docker load it on the bigger SD card (128 GB) to solve all the space problems and continue the work.

Hi @A98, from the first screenshots I see that you still have libreoffice and, I suspect, the default apps; you can try to uninstall them, which usually frees up around 1 GB. You can try:

sudo apt-get purge "thunderbird*"
sudo apt-get remove --purge "libreoffice*"
sudo apt-get clean
sudo apt-get autoremove

Another way is to delete temporary docker images. You can check whether any image was built during docker build or similar and is no longer needed, and delete it. To list the images:

sudo docker images

And then check the list to see if there are any images that are not needed anymore, and delete them with:

docker image rm IMAGE_ID
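If you only want to clear the untagged leftovers from builds, docker also has a prune subcommand (it asks for confirmation and, by default, only removes dangling images):

sudo docker image prune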

And lastly, you can use the external storage to hold docker data, like over here.
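One common way to do that (a sketch, not necessarily the exact method in the linked guide; it assumes the external drive is formatted ext4 and mounted at /mnt/external) is to point docker’s data directory at the external media in /etc/docker/daemon.json and restart the daemon:

# Contents of /etc/docker/daemon.json (create the file if it does not exist):
{
    "data-root": "/mnt/external/docker"
}

# Restart docker so it starts using the new location:
sudo systemctl restart docker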

Regards,
Andres
Embedded SW Engineer at RidgeRun
Contact us: support@ridgerun.com
Developers wiki: https://developer.ridgerun.com/
Website: www.ridgerun.com


In case it’s still not enough, here is a list of packages that you can remove with apt:

thunderbird*
libreoffice*

nvidia-l4t-graphics-demos
nvidia-l4t-cuda
nvidia-l4t-multimedia*
chromium-browser
chromium-codecs-ffmpeg-extra
chromium-browser-l10n

# Bluetooth
blueman
gnome-bluetooth
pulseaudio-module-bluetooth
bluez
bluez-obexd
indicator-bluetooth
ffmpeg

aisleriot
gnome-mahjongg
gnome-mines
gnome-sudoku
g++ g++-7  gcc  gcc-7  gdb gdbserver

# Audio
pulseaudio*

# VNC Server
remmina*

ubuntu-wallpapers*
youtube-dl
liblivemedia62 libvlc5 libvlccore9 vlc-data vlc-plugin-base vlc-plugin-video-output mpv libwildmidi-config lxmusic libavresample3 libavutil55 libpostproc54 libswresample2 libswscale4 *mesa* *audio* *-doc* *-dev*

Just make sure that you don’t need any of them.

Regards,
Andres

Hello. Thanks for the reply. I cleaned up enough space to docker save the container and successfully created the .tar file on my SD card. But, unfortunately, I still have a problem. I couldn’t copy it onto the external drive:


The copying process wasn’t complete, so I couldn’t docker load from the copy of this .tar file on my other SD card. (I decided to try docker load just in case):

How can I solve this error and copy my 4.4 GB .tar file onto the external drive?

Besides this, I have a question. Could you explain to me, please, what these are?

I mean, what exactly do all these containers do, and how are they associated with my images? This command gives a list of entries, such as a0911018, which do not even show up under the command sudo docker images -a.

I launch my programs via an image (not a container). For example: docker/[run.sh] -c trtpose2 --volume /home/atlas/Desktop/Jupiter1:/home/atlas/jetson-inference.
And after that, the command sudo docker ps -a gives exactly the same result. So all these containers (which were shown by sudo docker ps -a) are useless for me now and I can delete them, or can deleting them affect the work of my images?

Thanks.

What filesystem type is your SD card using? If it is ext4 then there shouldn’t be a file size limit of any importance. If it is VFAT/exFAT/FAT32, then I would expect limitations. If you use “df -H -T”, and then look for wherever the SD card is mounted, you’ll be able to see filesystem type.

Hi @A98, yeah, check the partition format as @linuxdev said; if it’s FAT32 or similar, it would limit the maximum file size. And regarding the output of

docker ps -a

It shows all the containers, running or not. So if you ran a container with run -it and then exited it, it will still be there using up resources. So usually, if you are done with a container, you can delete it like:

docker rm CONTAINER_ID
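If you only want to see which containers have already exited before deleting anything (a sketch; --filter is a standard docker ps option):

sudo docker ps -a --filter status=exited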

They can also be containers that were used during the docker build phase; after that they can be deleted as well. If you want to make a full clean-up, you can use:

docker rm $(docker ps -a -q)

Check out the --rm flag for docker run; if the container is meant to run and execute only once, it can be an easy way to avoid leaving created containers behind every time you do a docker run. The only thing is that the data in the container will be lost, so if you need to access some output data after the container is done, make sure to map it to a host volume.
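As a sketch of that (the image name and paths are examples only, not ones from this thread):

# One-shot run that removes the container when it exits, while outputs land on the host:
sudo docker run --rm -it --volume /home/atlas/output:/workspace/output my-image:latest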
Regards,
Andres

I checked whether the filesystem of my SD card has limitations, and I found out that it has a file size limit. I solved this problem with a PC running Linux.

I used a card reader and copied my .tar file from my old SD card (with the Jetson OS) to the PC with Linux. After that, I copied the .tar file from the Linux PC to my new SD card (with the Jetson OS). So I finally transferred my container from my old SD card to my new SD card. Thanks a lot for the information and for the help.
