Hi,
I know the memory is shared between the CPU and GPU, so according to the following capture, where my program is using 40% of the memory, can I guess that the rest is GPU memory usage?
I am investigating a memory leak, and right now I don't know whether the problem is in my GPU code or in my CPU code.
It seems to happen when I stop and start one (or more) gstreamer pipelines many times.
Thanks in advance if someone has an idea, or knows of any tool to see GPU memory usage on the Jetson Nano.
Hi @tvlanaccess, you can see the GPU memory usage with this command:
$ cat /proc/meminfo | grep NvMapMemUsed
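If you are chasing a leak, it can also help to log that counter over time while you stop and start the pipelines. A plain shell loop is enough (just a sketch; adjust the interval and log file as you like):
$ while true; do echo "$(date +%T) $(grep NvMapMemUsed /proc/meminfo)"; sleep 1; done | tee nvmap.log
If the value keeps growing across pipeline restarts, the leak is probably on the GPU/NvMap side; if it stays flat, look at the CPU-side allocations instead.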
Is it possible to give the GPU more memory (e.g. 512 MB)?
Since the CPU and GPU share the same physical RAM on Jetson, the GPU already has access to nearly all of the board’s memory (all but a couple hundred MB, which is reserved for the kernel).
If you need more memory for GPU allocations, you should free up system RAM by closing other running processes or services, switching to a lightweight window manager like LXDE, disabling the display, etc. You could also mount swap, which may allow system memory to be paged out and let the GPU allocate more.
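For example, creating and mounting a swap file looks roughly like this (4 GB is just an example size):
$ sudo fallocate -l 4G /swapfile
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile
Add an entry to /etc/fstab if you want it to persist across reboots.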
With the Jetson Nano SD card Ubuntu image, after disabling the desktop and containerd, I reduced RAM usage to something like 180 MB of the 4 GB.
I didn't investigate much, but I could remove a lot of things I don't use.
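In case anyone wants to do the same, it mostly comes down to booting to the console target and disabling the services you don't need, something like:
$ sudo systemctl set-default multi-user.target
$ sudo systemctl disable --now containerd
(The exact service names depend on what your image ships.)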
One question @dusty_nv: when I reboot I have something like 180 MB used, but after I run my program that uses gstreamer, an h264 decoder, some TensorRT engines, etc., and then close it, the RAM used is something like 700 MB.
Is this some kind of static allocation by the system that will never decrease, or something like that?
Look at this capture: nothing is running, yet no process accounts for these MB of memory:
$ cat /proc/meminfo
MemTotal: 4059412 kB
MemFree: 1368412 kB
MemAvailable: 3694140 kB
Buffers: 77544 kB
Cached: 1857224 kB
SwapCached: 4492 kB
Active: 837264 kB
Inactive: 1117628 kB
Active(anon): 20716 kB
Inactive(anon): 16276 kB
Active(file): 816548 kB
Inactive(file): 1101352 kB
Unevictable: 12028 kB
Mlocked: 0 kB
SwapTotal: 2029696 kB
SwapFree: 1980768 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 31004 kB
Mapped: 36348 kB
Shmem: 4840 kB
Slab: 121688 kB
SReclaimable: 67744 kB
SUnreclaim: 53944 kB
KernelStack: 3696 kB
PageTables: 2392 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 4059400 kB
Committed_AS: 337768 kB
VmallocTotal: 263061440 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
AnonHugePages: 2048 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
NvMapMemFree: 505684 kB
NvMapMemUsed: 76 kB
CmaTotal: 475136 kB
CmaFree: 86016 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
EDIT:
OK, I see why no process appears to be using that amount of memory. When I reboot the Jetson I have:
NvMapMemFree: 0 kB
NvMapMemUsed: 76 kB
But after I do some work on the GPU, even after closing my program, I get this:
NvMapMemFree: 505684 kB
NvMapMemUsed: 76 kB
What does this mean exactly? Is this memory no longer available as CPU RAM, but instead kept as preallocated buffers for any further GPU usage?
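For reference, both counters can be read at once with:
$ grep NvMap /proc/meminfo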
I'd like to understand the RAM usage. I now have a program that receives 5 RTSP streams (via gstreamer, decoding with nvv4l2decoder) and runs a yolo2 tiny network via TensorRT, and I get the kind of htop result below. What I don't understand is that when I restart a stream, the memory increases, yet my program is “only” using 38% of the total while 900 MB is used by the GPU (NvMapMemUsed).
How can I find out what is using this RAM?
$ cat /proc/meminfo
MemTotal: 4059412 kB
MemFree: 142576 kB
MemAvailable: 144852 kB
Buffers: 15232 kB
Cached: 70500 kB
SwapCached: 1156 kB
Active: 1084008 kB
Inactive: 402440 kB
Active(anon): 1039324 kB
Inactive(anon): 388664 kB
Active(file): 44684 kB
Inactive(file): 13776 kB
Unevictable: 12028 kB
Mlocked: 0 kB
SwapTotal: 2029696 kB
SwapFree: 8 kB
Dirty: 24 kB
Writeback: 0 kB
AnonPages: 1411792 kB
Mapped: 29724 kB
Shmem: 15244 kB
Slab: 93428 kB
SReclaimable: 31256 kB
SUnreclaim: 62172 kB
KernelStack: 5024 kB
PageTables: 11580 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 4059400 kB
Committed_AS: 1676752 kB
VmallocTotal: 263061440 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
AnonHugePages: 817152 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
NvMapMemFree: 32832 kB
NvMapMemUsed: 912320 kB
CmaTotal: 475136 kB
CmaFree: 4576 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
$ cat /proc/buddyinfo
Node 0, zone DMA 443 429 356 239 403 212 56 7 0 0 0
Node 0, zone Normal 1136 1201 751 15 6 2 1 1 0 0 0
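One quick cross-check against the captures above is to sum the RSS of every process (rough, since shared pages are counted more than once, and it typically won't include NvMap/CMA memory allocated on behalf of the GPU):
$ ps -eo rss= | awk '{sum += $1} END {print sum " kB"}'
If that total is far below MemTotal minus MemFree, the difference is likely sitting in NvMap, CMA, page cache or kernel slab rather than in an ordinary process.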
Yes, I believe Linux does lazy release of physical memory, so this is normal.
The NvMap pool can be re-used to speed up future GPU allocations, but if the OS is running low on memory, it would send NvMap a shrinker notification and NvMap would release its free memory back to the OS. So yes, it can still be used as CPU memory.
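To watch both sides of that at once while the pipelines are running, you can keep an eye on the NvMap counters next to the normal memory counters, for example:
$ watch -n 1 'grep -E "NvMapMem|MemAvailable|MemFree" /proc/meminfo'
NvMapMemFree growing after your program exits is the cached pool described above; it should shrink again if something else needs the RAM.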
See this post on how to read an individual process's memory usage from /proc/$pid: https://stackoverflow.com/a/131399
To tell how much NvMap memory a process uses, it is recommended to record the baseline NvMapMemUsed beforehand, and then subtract it from the increased amount while the process is running.
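For example, a rough way to script that baseline subtraction (just a sketch; my_program and the sleep are placeholders for however you launch your application and let it settle):
$ baseline=$(awk '/NvMapMemUsed/ {print $2}' /proc/meminfo)
$ ./my_program &        # placeholder for your actual application
$ sleep 30              # let the pipelines and TensorRT engines come up
$ used=$(awk '/NvMapMemUsed/ {print $2}' /proc/meminfo)
$ echo "NvMap memory attributable to the program: $((used - baseline)) kB"
You can read VmRSS from /proc/$pid/status at the same point to get the CPU-side share of the same process.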