Avoiding OOM errors
Windows does not have an out-of-memory process killer as Linux does. Windows always treats all user-mode memory allocations as virtual, and pagefiles are mandatory. The net effect is that Windows won't reach out-of-memory conditions the same way Linux does: processes will page to disk instead of being subject to out-of-memory (OOM) termination. If memory is over-provisioned and all physical memory is exhausted, paging can degrade performance.
Reserving system and kubelet memory
This differs from Linux, where
--kube-reserved captures resource reservations for Kubernetes system daemons such as the kubelet and container runtime, and
--system-reserved captures resource reservations for OS system daemons such as sshd and udev. On Windows, these flags do not capture and set memory limits on the kubelet or on processes running on the node.
However, you can combine these flags to manage NodeAllocatable, reducing the reported Capacity on the node, together with memory resource limits in pod manifests to control memory allocation per pod. This strategy gives you better control over memory allocation as well as a mechanism to minimize out-of-memory (OOM) conditions on Windows nodes.
On Windows nodes, a best practice is to reserve at least 2GB of memory for the OS and system processes. Use
--system-reserved to reduce NodeAllocatable accordingly.
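The resulting allocatable memory follows directly from the reservations and the kubelet's hard eviction threshold. A minimal sketch of that arithmetic is below; the 8 GiB capacity and the split between kube-reserved, system-reserved, and the eviction threshold are illustrative values, not EKS defaults:

```python
def node_allocatable_gib(capacity: float, kube_reserved: float,
                         system_reserved: float, eviction_hard: float) -> float:
    """NodeAllocatable = Capacity - kube-reserved - system-reserved - hard eviction threshold."""
    return capacity - kube_reserved - system_reserved - eviction_hard

# Example: an 8 GiB Windows node reserving 2 GiB in total for the OS and kubelet,
# plus a 0.2 GiB hard eviction threshold.
print(node_allocatable_gib(8.0, 0.5, 1.5, 0.2))  # 5.8 GiB left for pods
```

Pods scheduled onto this node can only request memory out of the resulting NodeAllocatable, not the full machine capacity.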
Following the Amazon EKS Self-managed Windows nodes documentation, use the CloudFormation template to launch a new Windows node group with customizations to the kubelet configuration. The CloudFormation template has an element called
BootstrapArguments, which is passed to the bootstrap script as
KubeletExtraArgs. Use it to pass the reservation and eviction flags to the kubelet.
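As an illustration, the BootstrapArguments value could look like the following; the memory sizes are example values, not recommended defaults, and should be tuned to your node size:

```
-KubeletExtraArgs '--system-reserved memory=1.5Gi --kube-reserved memory=0.5Gi --eviction-hard memory.available<0.2Gi'
```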
If eksctl is your deployment tool, see the following documentation to customize the kubelet configuration: https://eksctl.io/usage/customizing-the-kubelet/
Windows container memory requirements
As per Microsoft documentation, the Windows Server Nano Server base image requires at least 30MB, whereas Server Core requires 45MB. These numbers grow as you add Windows components such as the .NET Framework, web services such as IIS, and applications.
It is essential to know the minimum amount of memory required by your Windows container image, i.e. the base image plus its application layers, and to set it as the container's resources/requests value in the pod specification. You should also set a limit to prevent pods from consuming all available node memory in case of an application issue.
When the Kubernetes scheduler tries to place a pod on a node, the pod's requests are used to determine which node has sufficient resources available for scheduling.
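A sketch of such a pod specification is shown below; the image and the memory values are illustrative and should reflect your own measured requirements:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: iis-example
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: iis
    image: mcr.microsoft.com/windows/servercore/iis
    resources:
      requests:
        memory: "800Mi"   # minimum the scheduler must find free on a node
      limits:
        memory: "1Gi"     # cap so a faulty app cannot drain node memory
```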
Using this approach minimizes the risk of memory exhaustion but does not prevent it entirely. Using Amazon CloudWatch metrics, you can set up alerts and remediation in case memory exhaustion occurs.