Mastering Elastic Kubernetes Service on AWS

By: Malcolm Orr, Yang-Xin Cao (Eason)

Overview of this book

Kubernetes has emerged as the de facto standard for container orchestration, with recent developments making it easy to deploy and handle a Kubernetes cluster. However, a few challenges such as networking, load balancing, monitoring, and security remain. To address these issues, Amazon EKS offers a managed Kubernetes service to improve the performance, scalability, reliability, and availability of AWS infrastructure and integrate with AWS networking and security services with ease. You’ll begin by exploring the fundamentals of Docker, Kubernetes, Amazon EKS, and its architecture along with different ways to set up EKS. Next, you’ll find out how to manage Amazon EKS, encompassing security, cluster authentication, networking, and cluster version upgrades. As you advance, you’ll discover best practices and learn to deploy applications on Amazon EKS through different use cases, including pushing images to ECR and setting up storage and load balancing. With the help of several actionable practices and scenarios, you’ll gain the know-how to resolve scaling and monitoring issues. Finally, you will overcome the challenges in EKS by developing the right skill set to troubleshoot common issues with the right logic. By the end of this Kubernetes book, you’ll be able to effectively manage your own Kubernetes clusters and other components on AWS.
Table of Contents (28 chapters)

Part 1: Getting Started with Amazon EKS
Part 2: Deep Dive into EKS
Part 3: Deploying an Application on EKS
Part 4: Advanced EKS Service Mesh and Scaling
Part 5: Overcoming Common EKS Challenges

Summary

In this chapter, we looked at the different ways to scale EKS compute nodes (EC2) to increase resilience and/or performance. We reviewed the different scaling dimensions for our clusters and then set up node group/ASG scaling using the standard Kubernetes Cluster Autoscaler (CA). We then discussed how CA can take some time to operate and is restricted to ASGs, whereas Karpenter can scale much more quickly without the need for node groups, which lets you configure many different instance types. We deployed Karpenter and showed how it can scale EC2-based worker nodes up and down faster than CA, using instance types different from those in the existing node groups.
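As an illustration of the Karpenter configuration discussed above, the sketch below shows a minimal node-provisioning resource that allows a mix of instance types and capacity types. Note that the exact API surface varies by Karpenter release (recent releases use `NodePool`/`EC2NodeClass`; older releases used a `Provisioner` resource), and the names here (`default`) are placeholders rather than values from this chapter:

```yaml
# Hypothetical Karpenter NodePool sketch - field names follow the
# karpenter.sh/v1 API; check your installed Karpenter version's docs.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        # Allow both on-demand and Spot capacity
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand", "spot"]
        # No instance-type restriction here, so Karpenter may pick from
        # many EC2 instance types, unlike an ASG-backed node group
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  limits:
    cpu: "100"   # cap total provisioned vCPUs for this pool
```

Because Karpenter provisions EC2 instances directly in response to pending pods, rather than adjusting an ASG's desired count and waiting for the ASG to act, nodes typically become ready faster than with CA.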

Having reviewed how to scale worker nodes, we discussed how to use the Horizontal Pod Autoscaler (HPA) to scale pods across those nodes. We first looked at basic HPA functionality, which uses the Kubernetes Metrics Server to monitor pod CPU and memory statistics and add or remove pods from a deployment as required. We then considered that complex...
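The basic HPA behavior described above can be sketched with a standard `autoscaling/v2` manifest. The deployment name (`web`) and the 70% CPU target are illustrative assumptions, not values from this chapter; this relies on Metrics Server being installed in the cluster:

```yaml
# Minimal HPA sketch - scales a hypothetical "web" Deployment on CPU
# utilization reported by Metrics Server.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # add pods when average CPU exceeds 70%
```

When average CPU across the deployment's pods rises above the target, the HPA controller increases the replica count (up to `maxReplicas`); if the new pods cannot be scheduled, CA or Karpenter then provisions additional nodes, which is how pod-level and node-level scaling interact.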