Have you ever experienced a sudden crash of an application on Linux? Most people have. The most common cause of a crash is a segmentation fault, but that is not the subject of this article. Here, I’ll talk about what Linux does when memory is tight.
On Linux, by default, a successful memory request is no guarantee that the memory is actually available. The kernel does very little checking and happily lets you request more memory than the system has (this is called overcommitting). The memory you get is only virtual: it is not mapped to any physical memory until something is written to it.
If a virtual memory address does not map to any physical memory and is accessed, a page fault occurs, and the OS allocates physical memory to hold the data. If it has trouble finding free memory, it tries to move something out to the swap space and reuse the memory that frees up. If even swapping fails, there is not enough memory, and the OOM killer is activated: it kills processes that use too much memory until enough has been freed.
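To see overcommit and demand paging in action, here is a small Python sketch (Linux-only; the helper name `rss_kib` is mine, not a standard API). It maps a block of anonymous memory, which costs almost no physical memory, then touches every page and watches the process's resident set size (VmRSS) grow:

```python
import mmap

def rss_kib():
    """Return this process's resident set size (VmRSS) in KiB."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise RuntimeError("VmRSS not found")

# Map 256 MiB of anonymous memory. This only reserves virtual
# address space; no physical pages are allocated yet.
SIZE = 256 * 1024 * 1024
buf = mmap.mmap(-1, SIZE)
before = rss_kib()

# Write one byte to each 4 KiB page. The first write to a page
# triggers a page fault, and the kernel backs it with physical memory.
for offset in range(0, SIZE, 4096):
    buf[offset] = 1

after = rss_kib()
print(f"RSS grew by about {(after - before) // 1024} MiB")
```

Before the loop, the 256 MiB mapping barely shows up in VmRSS; after every page has been written once, RSS has grown by roughly the full 256 MiB.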
To prevent the OOM killer from killing processes, you can disable overcommitting in the kernel:
echo 2 > /proc/sys/vm/overcommit_memory
By doing this, every memory request is checked against the formula:
total_memory_available = swap_space + RAM * (/proc/sys/vm/overcommit_ratio / 100)
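The kernel exposes the result of this formula as the CommitLimit field in /proc/meminfo, so you can recompute it yourself. A small Python sketch (Linux-only; `meminfo_kib` is a helper name of my own, and the kernel's value may differ slightly, e.g. when huge pages are reserved):

```python
def meminfo_kib(field):
    """Read one field of /proc/meminfo, in KiB."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    raise KeyError(field)

with open("/proc/sys/vm/overcommit_ratio") as f:
    ratio = int(f.read())

ram = meminfo_kib("MemTotal")
swap = meminfo_kib("SwapTotal")

# total_memory_available = swap_space + RAM * (overcommit_ratio / 100)
limit = swap + ram * ratio // 100

print(f"computed limit:     {limit} KiB")
print(f"kernel CommitLimit: {meminfo_kib('CommitLimit')} KiB")
```

Note that CommitLimit is reported in every overcommit mode, but it is only enforced in mode 2.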
If a request would exceed this limit, it simply fails. Therefore, the OOM killer should not be activated at all as long as /proc/sys/vm/overcommit_ratio is set below 100. This has a drawback, however: programs sometimes allocate memory without ever using it. With overcommitting disabled, that reserved-but-unused memory still counts against the limit, so not all of the system's memory can actually be used.
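One way to observe this drawback is to compare Committed_AS (the total memory the kernel has promised to all processes, used or not) against CommitLimit in /proc/meminfo. A short Python sketch (Linux-only; `meminfo_kib` is again my own helper name):

```python
def meminfo_kib(field):
    """Read one field of /proc/meminfo, in KiB."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    raise KeyError(field)

committed = meminfo_kib("Committed_AS")   # promised, not necessarily used
limit = meminfo_kib("CommitLimit")        # swap + RAM * ratio / 100

print(f"{committed} KiB committed out of a {limit} KiB limit "
      f"({100 * committed // limit}% of the limit)")
```

In mode 2, once Committed_AS reaches CommitLimit, further allocations fail even if plenty of physical memory is still untouched.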