# Cluster Services
Cluster services run inside an EKS cluster, but they are not user workloads. If you have a Linux server you often need to run services like NTP, syslog, and a container runtime to support your workloads. Cluster services are similar, supporting services that help you automate and operate your cluster. In Kubernetes these are usually run in the kube-system namespace and some are run as DaemonSets.
Cluster services are expected to have high uptime and are often critical during outages and for troubleshooting. If a core cluster service is not available you may lose access to data that can help recover or prevent an outage (e.g. high disk utilization). They should run on dedicated compute instances such as a separate node group or AWS Fargate. This ensures that cluster services are not impacted by workloads sharing the same instances that may be scaling up or consuming more resources.
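For example, if you create a dedicated node group for cluster services and apply a label and a taint to it, a cluster service's pod spec can be pinned there with a node selector and a matching toleration. This is a minimal sketch; the label and taint key/value shown below are placeholders you would define yourself.

```yaml
# Excerpt from a cluster service's pod template (label and taint values are placeholders)
spec:
  nodeSelector:
    node-group: cluster-services          # label applied to the dedicated node group
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "cluster-services"
      effect: "NoSchedule"                # matches the taint on the dedicated nodes
```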
## Scale CoreDNS
Scaling CoreDNS has two primary mechanisms: reducing the number of calls to the CoreDNS service and increasing the number of replicas.
### Reduce external queries by lowering ndots
The ndots setting specifies how many periods (a.k.a. "dots") a domain name must contain before it is treated as fully qualified and the search domains are skipped. If your application has an ndots setting of 5 (the default) and you request resources from an external domain such as api.example.com (2 dots), then CoreDNS will be queried for each search domain defined in /etc/resolv.conf for a more specific domain. By default the following domains will be searched before making an external request.
api.example.<namespace>.svc.cluster.local
api.example.svc.cluster.local
api.example.cluster.local
api.example.<region>.compute.internal
The <namespace> and <region> values will be replaced with your workload's namespace and your compute region. You may have additional search domains based on your cluster settings.
You can reduce the number of requests to CoreDNS by lowering the ndots option of your workload or by fully qualifying your domain requests with a trailing . (e.g. api.example.com.). If your workload connects to external services via DNS, we recommend setting ndots to 2 so workloads do not make unnecessary DNS queries inside the cluster. You can set a different DNS server and search domain if the workload doesn't require access to services inside the cluster.
If you set ndots too low, or the domains you connect to are not specific enough (including a trailing .), DNS lookups may fail. Make sure you test how this setting will impact your workloads.
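As a minimal sketch, the Pod spec below lowers ndots to 2 through dnsConfig; the pod name and image are placeholders, and the rest of your spec stays as it is today.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: low-ndots-example            # placeholder name
spec:
  containers:
    - name: app
      image: public.ecr.aws/docker/library/busybox:stable   # placeholder image
      command: ["sleep", "3600"]
  dnsConfig:
    options:
      - name: ndots
        value: "2"                   # names with 2+ dots are tried as absolute names first
```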
### Scale CoreDNS Horizontally
CoreDNS instances can scale by adding additional replicas to the deployment. It's recommended you use NodeLocal DNS or the cluster proportional autoscaler to scale CoreDNS.
NodeLocal DNS runs one instance per node as a DaemonSet, which requires more compute resources in the cluster, but it avoids failed DNS requests and decreases the response time for DNS queries in the cluster. The cluster proportional autoscaler scales CoreDNS based on the number of nodes or cores in the cluster. This isn't a direct correlation to request queries, but it can be useful depending on your workloads and cluster size. The default proportional scale is to add an additional replica for every 256 cores or 16 nodes in the cluster, whichever happens first.
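If you deploy the cluster proportional autoscaler, its linear scaling parameters typically live in a ConfigMap similar to the sketch below; the ConfigMap name and the exact values are assumptions you should adapt to your own deployment.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-autoscaler               # name depends on how you deploy the autoscaler
  namespace: kube-system
data:
  # One extra CoreDNS replica per 256 cores or per 16 nodes, whichever comes first.
  linear: |-
    {
      "coresPerReplica": 256,
      "nodesPerReplica": 16,
      "min": 2,
      "preventSinglePointFailure": true
    }
```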
## Scale Kubernetes Metrics Server Vertically
The Kubernetes Metrics Server supports horizontal and vertical scaling. Horizontally scaling the Metrics Server makes it highly available, but it will not scale to handle more cluster metrics. You will need to vertically scale the Metrics Server based on its recommendations as nodes and collected metrics are added to the cluster.
The Metrics Server keeps the data it collects, aggregates, and serves in memory. As a cluster grows, the amount of data the Metrics Server stores increases. In large clusters the Metrics Server will require more compute resources than the memory and CPU reservation specified in the default installation. You can use the Vertical Pod Autoscaler (VPA) or Addon Resizer to scale the Metrics Server. The Addon Resizer scales vertically in proportion to worker nodes and VPA scales based on CPU and memory usage.
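As one option, a VerticalPodAutoscaler object targeting the Metrics Server Deployment might look like the sketch below. It assumes the VPA components are installed in the cluster and that the Metrics Server runs as the metrics-server Deployment in kube-system; adjust the names to match your installation.

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: metrics-server-vpa           # placeholder name
  namespace: kube-system
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: metrics-server
  updatePolicy:
    updateMode: "Auto"               # let VPA apply its CPU/memory recommendations
  resourcePolicy:
    containerPolicies:
      - containerName: metrics-server
        controlledResources: ["cpu", "memory"]
```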
## CoreDNS lameduck duration
Pods use the kube-dns Service for name resolution. Kubernetes uses destination NAT (DNAT) to redirect kube-dns traffic from nodes to CoreDNS backend pods. As you scale the CoreDNS Deployment, kube-proxy updates iptables rules and chains on nodes to redirect DNS traffic to CoreDNS pods. Propagating new endpoints when you scale up and deleting rules when you scale down CoreDNS can take between 1 and 10 seconds depending on the size of the cluster.
This propagation delay can cause DNS lookup failures when a CoreDNS pod gets terminated yet the node’s iptables rules haven’t been updated. In this scenario, the node may continue to send DNS queries to a terminated CoreDNS Pod.
You can reduce DNS lookup failures by setting a lameduck duration in your CoreDNS pods. While in lameduck mode, CoreDNS will continue to respond to in-flight requests. Setting a lameduck duration will delay the CoreDNS shutdown process, allowing nodes the time they need to update their iptables rules and chains.
We recommend setting CoreDNS lameduck duration to 30 seconds.
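In the Corefile, lameduck is an option of the health plugin. A trimmed sketch of the coredns ConfigMap in kube-system is shown below; keep the plugins your cluster already uses, and note that on EKS this ConfigMap may be managed by the CoreDNS add-on, so apply the change through whatever manages your CoreDNS configuration.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 30s    # keep serving in-flight queries for 30s during shutdown
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }
```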
## CoreDNS readiness probe
We recommend using /ready instead of /health for CoreDNS's readiness probe.
This aligns with the earlier recommendation to set the lameduck duration to 30 seconds, which gives nodes ample time to update their iptables rules before pod termination. Using /ready instead of /health for the CoreDNS readiness probe ensures that the CoreDNS pod is fully prepared at startup to promptly respond to DNS requests.
For more information about the CoreDNS Ready plugin please refer to https://coredns.io/plugins/ready/
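A minimal sketch of the probe configuration on the CoreDNS Deployment's container, assuming the default ports of 8181 for the ready plugin and 8080 for the health plugin; adjust the ports if your Corefile exposes them elsewhere.

```yaml
# Excerpt from the coredns Deployment's container spec
readinessProbe:
  httpGet:
    path: /ready          # served by the CoreDNS ready plugin
    port: 8181
livenessProbe:
  httpGet:
    path: /health         # the health plugin remains suitable for liveness
    port: 8080
```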
## Logging and monitoring agents
Logging and monitoring agents can add significant load to your cluster control plane because the agents query the API server to enrich logs and metrics with workload metadata. The agent on a node only has access to local node resources, so it can see things like container and process names. By querying the API server it can add more details such as the Kubernetes deployment name and labels. This can be extremely helpful for troubleshooting but detrimental to scaling.
Because there are so many different options for logging and monitoring, we cannot show examples for every provider. With Fluent Bit, we recommend enabling Use_Kubelet to fetch metadata from the local kubelet instead of the Kubernetes API server, and setting Kube_Meta_Cache_TTL to a number that reduces repeated calls when data can be cached (e.g. 60).
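A sketch of the relevant Fluent Bit kubernetes filter settings, shown embedded in a hypothetical ConfigMap; the ConfigMap name, file name, and Match pattern are assumptions you should adapt to your own Fluent Bit deployment, and Use_Kubelet also requires that the agent can reach the kubelet on its node (check the Fluent Bit documentation for the exact requirements).

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config            # placeholder; match your Fluent Bit deployment
  namespace: kube-system
data:
  filters.conf: |
    [FILTER]
        # Fetch pod metadata from the local kubelet instead of the API server
        Name                kubernetes
        Match               kube.*
        Use_Kubelet         On
        Kubelet_Port        10250
        # Cache metadata so repeated lookups within the TTL avoid new calls
        Kube_Meta_Cache_TTL 60
```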
Scaling monitoring and logging has two general options:
- Disable integrations
- Sampling and filtering
Disabling integrations is often not an option because you lose log metadata. This eliminates the API scaling problem, but it will introduce other issues by not having the required metadata when needed.
Sampling and filtering reduces the number of metrics and logs that are collected. This will lower the amount of requests to the Kubernetes API, and it will reduce the amount of storage needed for the metrics and logs that are collected. Reducing the storage costs will lower the cost for the overall system.
The ability to configure sampling depends on the agent software and can be implemented at different points of ingestion. It’s important to add sampling as close to the agent as possible because that is likely where the API server calls happen. Contact your provider to find out more about sampling support.
If you are using CloudWatch and CloudWatch Logs you can add agent filtering using patterns described in the documentation.
To avoid losing logs and metrics you should send your data to a system that can buffer data in case of an outage on the receiving endpoint. With Fluent Bit you can use Amazon Kinesis Data Firehose to temporarily keep data, which can reduce the chance of overloading your final data storage location.
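As a sketch of that pattern with Fluent Bit's built-in kinesis_firehose output plugin: the delivery stream name, region, and Match pattern below are placeholders, and the agent needs IAM permissions (for example via IAM Roles for Service Accounts) to write to the delivery stream.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config            # placeholder; match your Fluent Bit deployment
  namespace: kube-system
data:
  outputs.conf: |
    [OUTPUT]
        # Buffer logs through a Firehose delivery stream before final storage
        Name            kinesis_firehose
        Match           kube.*
        region          us-west-2
        delivery_stream my-cluster-logs
```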