High memory usage of the OS on Jetson Nano

Hi all,
I want to reduce the initial memory usage of my Jetson Nano. When I reboot and connect to the Jetson Nano over SSH, initial memory usage is about 400 MB; after I run my Python code it stays at about 800 MB. If I reboot the system I get 400 MB again, and running the Python code brings it back to about 800 MB. I already removed the Unity desktop by following this link, but I want memory usage to drop back to about 400 MB permanently after I stop my programs and Python code.

I would also like to know whether it is possible to reduce the memory usage to even less than 400 MB.
Is it possible to remove some unused packages from JetPack? I only want CUDA and cuDNN from JetPack.

Be careful to distinguish between RAM which is truly in use and must stay in use, versus RAM which is being used for temporary purposes and is always available for other use. RAM can be classified in many ways.

One very good example is filesystem caching. As the eMMC is read, that content may be put into RAM to make the next read faster. This is a cache use. As soon as any application needs this RAM, the cache goes away and the RAM is used by whatever other application needed it.

If you were to run a program which allocates a certain amount of RAM, then that is something which does not cache. Such a program, if there were not enough free RAM, would consume the cache RAM used for speeding up filesystem reads (at the cost of filesystem reads slowing down).
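If you want to watch this happen, here is a rough Python sketch (my own illustration, not something from JetPack; the file path and allocation size are just placeholders) which prints the “Cached” field of “/proc/meminfo” after reading a large file and again after a large allocation:

    # Sketch only: watch cache grow after a file read and shrink after a
    # big allocation. Field names come from /proc/meminfo.
    def meminfo(field):
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])   # value in kB
        raise KeyError(field)

    print("Cached before read :", meminfo("Cached"), "kB")

    # Reading a large file pulls its contents into the page cache.
    with open("/var/log/syslog", "rb") as f:      # any large file works
        while f.read(1024 * 1024):
            pass
    print("Cached after read  :", meminfo("Cached"), "kB")

    # A big allocation puts pressure on RAM; if free RAM runs short, the
    # kernel reclaims cached pages to satisfy it.
    big = bytearray(500 * 1024 * 1024)            # ~500 MB, adjust to taste
    print("Cached after alloc :", meminfo("Cached"), "kB")

On a 4 GB Nano the allocation may not be large enough to force any reclaim; the point is simply that cache is the first thing the kernel gives back when an application actually needs the RAM.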

Cache is useful because you have the RAM, so you might as well take advantage of it. Why not use it if it is available for free?

Notice in your htop display that approximately half of the memory bar chart is green (the left side). The right half is basically available any time any application needs it. Only that green left half is really in use.

If you want to see a GUI app which is a bit more intuitive, then try “sudo apt-get install xosview”, and then run “xosview”. Enlarge this to fill the window, and notice it has a legend for the different colors. Buffer and cache are not needed, but help with performance. Others might be part of a “pool” system of memory and would probably be best left to stick around, but may not be mandatory (“SLAB” is contiguous, which means it has special uses; once memory fragments, certain drivers or hardware may no longer work even if total RAM is sufficient).

If you run into a situation where memory is insufficient, then you’ll need to specify the particular use case and details. If you are just working on generally reducing RAM usage, then it could be that things are already working as they should.

Thanks.
Q1 - In the first image, it shows 785MB/3.87GB used, but the second image shows 1.5 GB of memory used. Why are these different? Which is correct?
Q2 - In the first image, is the 785MB all memory (used + cached), or just the used memory?
Q3 - In the second image, in the CPU bar, what is the red color? And IDLE?
Q4 - How can I remove the cached memory?
Q5 - Because I use gstreamer for decoding and it uses buffers, is it possible to free the buffer space at the same time? How?

[Screenshot from 2020-07-23 13-07-14]

Both are correct, since they show different classifications. Remember that some RAM is used because it is mandatory for something to run, while other types of RAM use are just taking advantage of RAM which is not otherwise in use, as a method to improve performance. Those other types give up the RAM immediately if it is needed. So you have to look at classifications, and in this case, probably also color coding.

Note that memory which is “green” is memory you could reduce by removing the programs which use it. Other memory does not require any attention from you, and it will be released if needed.

This would require reading the docs for the particular application displaying this information, and probably at times also information on the specific architecture. If you want something directly from Linux, then run the command “cat /proc/meminfo”. The files in “/proc” are not real files, but kernel RAM pretending to be files.

In the case of “/proc/meminfo”, this is what the kernel itself keeps track of. These are classifications, and different classifications often overlap with other classifications. There are a lot of classifications in “/proc/meminfo”, and htop, xosview, and most other programs try to make some simplifications. The NVIDIA program “tegrastats” has some memory information along with other GPU-related stats.
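As a quick illustration of how two tools can read the same “/proc/meminfo” and report different “used” figures, something like this Python sketch (my own illustration, not part of any NVIDIA tool) shows the two common calculations side by side:

    # Sketch only: two ways of computing "used" RAM from /proc/meminfo.
    def meminfo():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                info[key] = int(rest.split()[0])   # values are in kB
        return info

    m = meminfo()

    # Counting cache/buffers as "used" gives the larger number.
    naive_used = m["MemTotal"] - m["MemFree"]

    # Subtracting what the kernel could reclaim at any time gives the
    # smaller number (roughly the green part of the htop bar).
    truly_used = m["MemTotal"] - m["MemAvailable"]

    print("Buffers + Cached :", m["Buffers"] + m["Cached"], "kB")
    print("Naive used       :", naive_used, "kB")
    print("Truly used       :", truly_used, "kB")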

From the man page of xosview (“man xosview”):

WIO:
       Time spent waiting for I/O to complete. Available on Linux kernel 2.6.0 and higher.

If you run anything needing more memory, then cache is automatically released. You do not want to remove cached memory; this is not an application consuming memory. Cache is spare memory, not used memory…but the spare memory has been used for reading some data in the past, and any application wanting that same data gets it faster as long as the data is in cache. When the cache is gone, accessing that same data comes from a far slower device, perhaps millions of times slower. You can’t really remove cache, and if you did, then there would be a combination of no benefit and a much slower operating system. Removing cache is a really “bad” thing.
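If you want to see the benefit directly, here is a small Python sketch (again just an illustration; the path is a placeholder) which times the same read twice, once from the eMMC and once from the page cache:

    # Sketch only: the second read of the same file is served from cache.
    import time

    def read_all(path):
        t0 = time.time()
        with open(path, "rb") as f:
            while f.read(1024 * 1024):
                pass
        return time.time() - t0

    path = "/var/log/syslog"                         # any large file on the eMMC
    print("first read : %.3f s" % read_all(path))    # may hit the eMMC
    print("second read: %.3f s" % read_all(path))    # served from page cache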

Someone with gstreamer knowledge would have to answer that, but basically ending the program ends the buffer space. Perhaps gstreamer has options at startup which can provide different behaviors related to buffers, but I don’t know enough about gstreamer to answer that. Keep in mind that allocating and deallocating buffers in an application is a slow process, and that some applications or groups of applications create a buffer “pool” and can then rapidly increase or decrease how much of the buffer is used at any particular moment in time. If a program pre-allocates a buffer, then it probably has a good reason for this. If the buffer does not go away after the program ends, then perhaps the program is running as a service and not as an individual program. I don’t know for gstreamer.
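As a generic illustration of the “pool” idea (this is not gstreamer’s API, just a sketch of why pre-allocation helps), an application might do something like this in Python:

    # Sketch only: pre-allocate buffers once and reuse them, instead of
    # paying the allocate/free cost for every frame.
    import queue

    class BufferPool:
        def __init__(self, count, size):
            self._free = queue.Queue()
            for _ in range(count):
                self._free.put(bytearray(size))   # allocated once, up front

        def acquire(self):
            return self._free.get()               # reuse an existing buffer

        def release(self, buf):
            self._free.put(buf)                   # hand it back to the pool

    pool = BufferPool(count=8, size=4 * 1024 * 1024)   # e.g. 8 x 4 MB buffers
    buf = pool.acquire()
    # ... fill buf with decoded data, pass it downstream ...
    pool.release(buf)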
