This is definitely wrong. Has the kernel ever been changed? You should never see a commented-out symbol which is also either “=y” or “=m”. It isn’t the space I am wondering about, it is the leading hash mark, “#”. That hash mark says the configuration is inert, and equivalent to not set.
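If your kernel was built with the config-in-proc option (stock Jetson kernels usually have it, but I can’t guarantee yours does), you can check the running kernel’s swap-related symbols directly:
zcat /proc/config.gz | grep -i swap
A working symbol shows up as “CONFIG_SWAP=y”; a disabled one shows up as “# CONFIG_SWAP is not set”, never as a commented-out “=y” line.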
“/etc/fstab” is the full path to a file, and is not a directory. You would:
cd /etc
cat fstab
(or just upload “/etc/fstab” as a file to the forum)
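For reference, a swap file entry in “/etc/fstab” normally looks something like this (the path is only an example; use whatever path you actually created):
/swapfile  none  swap  sw  0  0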
It is fascinating that your swapon worked and is doing what you want for the latest case of “/home/p/swapfile”, and yet the config shows features commented out.
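You can see exactly which swap devices and files are active, and how much of each is used, with:
swapon --show
free -m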
Also, “/swapfile” is the preferred location. The security issue that was noted is that the swap file needs correct permissions. Yours was probably set readable by a non-root user.
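As a sketch (the 4 GB size is only an example), a root-only swap file is normally created like this:
sudo fallocate -l 4G /swapfile    # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=4096
sudo chmod 600 /swapfile          # root-only permissions; this is what the security warning is about
sudo mkswap /swapfile
sudo swapon /swapfile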
If the system runs out of memory, then it will possibly do one of these:
- Use the “Out of Memory” killer (the “OOM killer”) to kill applications (you can check the kernel log for this, as shown after the list).
- Lock up and become unusable.
- Panic.
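If you suspect the first case has already happened, the kernel log will show it:
sudo dmesg | grep -i -e "out of memory" -e oom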
However, swap is not everything you think it is. It won’t solve some situations. Here is some background…
When a computer boots, before end users get to log in, devices are accessed via a physical absolute address. Those addresses are hard wired or in firmware. Later plug-n-play devices can be used which are more flexible. The PCIe bus itself is generally hard wired or based on firmware, but the cards in the slots of the PCIe bus are plug-n-play and don’t generally need the kernel to find them in some blind manner.
The GPU on a Jetson is integrated directly to the memory controller (making it an integrated “iGPU”). Its access is not PCIe, and it is found via either hard wiring an address or via firmware (such as a device tree).
Such devices are communicated with via a “device driver”. Those drivers use an actual exact physical address.
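You can actually see the physical address ranges which drivers have claimed by reading “/proc/iomem” (read it as root, otherwise the addresses are masked to zeros):
sudo cat /proc/iomem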
End user programs use a virtual address. When memory is assigned or allocated to a user program, each program sees its own address space which appears to start at 0x0. The memory controller is using a bit of magic to make the programs think the memory is between 0x0 and some other address range. The memory might even be split up (fragmented) into multiple smaller chunks, and the memory controller is giving that end user program the illusion that the memory is one contiguous, unbroken chunk.
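You can see a process’s virtual address layout under “/proc/<pid>/maps”; for example, the cat command reading its own map:
cat /proc/self/maps
Every range in that listing is a virtual address, private to that process; none of it is the physical addressing the hardware itself uses.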
Drivers with physical addresses cannot use broken chunks in many cases, even if the memory controller uses its magic to rearrange things. End user programs can. Swap files and zram are virtual. They are part of a fragmented pool of memory which the memory controller can use to give separate fragments the illusion of being one big contiguous pool without breaks.
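Incidentally, if part of your swap is zram (the stock Jetson image typically sets up zram devices), you can list those with:
zramctl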
So if your end user program allocates too much memory, but you have swap, then you’re ok because swap will be added to the pool. You could eventually run out of swap though. Then your program will be killed via the OOM (it is actually a bit more complicated because program priorities might alter what gets killed).
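The priorities I mentioned are the per-process OOM scores, visible (and adjustable) under “/proc”. Replace “self” with the PID of the program you care about:
cat /proc/self/oom_score        # the "badness" score the OOM killer uses to pick a victim
cat /proc/self/oom_score_adj    # adjustment from -1000 (never kill) to +1000 (kill first)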
If your program depends on physical addresses, then swap will never take part; swap won’t help there. This includes CUDA based programs because the GPU uses physical addressing. No matter how much swap you add, the GPU will never benefit directly from swap. It is true that some other user space program could be killed, and this would free some memory for the GPU, but that is indirect, and it is not guaranteed to help (the freed memory might be fragmented in such a way that the GPU still could not use it, or the GPU might be able to use only a subset of what was released).
So swap helps when it is a user space program that runs out of memory. If you have CUDA or AI software which is running out of memory in the part which uses physical addressing, then your swap will not help.
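If you want to see which side is actually running out, watch memory live while the program runs (tegrastats ships with L4T/JetPack):
sudo tegrastats
If the failure occurs while plenty of swap is still free, that points at the physically addressed (GPU) side rather than ordinary user space allocation.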
If you are doing this because of running out of memory, then you might state under what circumstances it is running out. There might be nothing you can do about it (although perhaps redesigning a CUDA code block could use less memory, e.g., using fewer CUDA cores could lead to not using as much physical addressing).