Mastering Elastic Kubernetes Service on AWS

By: Malcolm Orr, Yang-Xin Cao (Eason)

Overview of this book

Kubernetes has emerged as the de facto standard for container orchestration, with recent developments making it easy to deploy and handle a Kubernetes cluster. However, a few challenges such as networking, load balancing, monitoring, and security remain. To address these issues, Amazon EKS offers a managed Kubernetes service to improve the performance, scalability, reliability, and availability of AWS infrastructure and integrate with AWS networking and security services with ease. You’ll begin by exploring the fundamentals of Docker, Kubernetes, Amazon EKS, and its architecture along with different ways to set up EKS. Next, you’ll find out how to manage Amazon EKS, encompassing security, cluster authentication, networking, and cluster version upgrades. As you advance, you’ll discover best practices and learn to deploy applications on Amazon EKS through different use cases, including pushing images to ECR and setting up storage and load balancing. With the help of several actionable practices and scenarios, you’ll gain the know-how to resolve scaling and monitoring issues. Finally, you will overcome the challenges in EKS by developing the right skill set to troubleshoot common issues with the right logic. By the end of this Kubernetes book, you’ll be able to effectively manage your own Kubernetes clusters and other components on AWS.
Table of Contents (28 chapters)

Part 1: Getting Started with Amazon EKS
Part 2: Deep Dive into EKS
Part 3: Deploying an Application on EKS
Part 4: Advanced EKS Service Mesh and Scaling
Part 5: Overcoming Common EKS Challenges

A deeper dive into containers

A container is a purely logical construct, consisting of a set of technologies glued together by the container runtime. This section provides a more detailed view of the technologies used in the Linux kernel to create and manage containers. The two foundational Linux kernel features are namespaces and control groups:

  • Namespaces (in the context of Linux): A namespace is a feature of the Linux kernel used to partition kernel resources, allowing processes running within the namespace to be isolated from other processes. Each namespace will have its own process IDs (PIDs), hostname, network access, and so on.
  • Control groups: A control group (cgroup) is used to limit the resources, such as CPU, RAM, disk I/O, or network I/O, that a process or set of processes can consume. Originally developed by Google, this technology has been incorporated into the Linux kernel.

The combination of namespaces and control groups in Linux allows a container to be defined as a set of isolated processes (namespace) with resource limits (cgroups):

Figure 1.2 – The container as a combination of cgroup and namespace
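
If you want to see these kernel features in action outside of a container runtime, you can experiment with them directly from a shell. The following is a minimal sketch, assuming a Linux host with root access and cgroup v2 mounted at /sys/fs/cgroup with the cpu controller enabled; the demo group name is purely illustrative:

$ # Start a shell in new PID and mount namespaces
$ sudo unshare --pid --fork --mount-proc /bin/sh
# ps -ef      # only this shell and ps are visible, not the host's processes
# exit
$ # Create a cgroup and cap its CPU usage to half a core (50ms every 100ms)
$ sudo mkdir /sys/fs/cgroup/demo
$ echo "50000 100000" | sudo tee /sys/fs/cgroup/demo/cpu.max

A container runtime automates exactly these steps, creating the namespaces and cgroups for each container and cleaning them up when the container exits.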

The way a container image is created is important, as it has a direct bearing on how that container works and how it is secured. A union filesystem (UFS) is a special type of filesystem used in container images and will be discussed next.

Getting to know union filesystems

A UFS is a type of filesystem that can merge/overlay multiple directories/files into a single view. It also gives the appearance of a single writable filesystem, but the underlying layers are read-only and the original content is never modified; any changes are written to a separate writable layer. The most common example of this is OverlayFS, which is included in the Linux kernel and used by Docker by default.
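
To see the same overlay behavior that container runtimes rely on, you can mount OverlayFS by hand. The following is a minimal sketch using hypothetical directories under /tmp; files from the read-only lower directories and the writable upper directory appear merged in a single view:

$ mkdir -p /tmp/lower1 /tmp/lower2 /tmp/upper /tmp/work /tmp/merged
$ sudo mount -t overlay overlay \
    -o lowerdir=/tmp/lower1:/tmp/lower2,upperdir=/tmp/upper,workdir=/tmp/work \
    /tmp/merged
$ # Writes to /tmp/merged land in /tmp/upper; the lower directories stay untouched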

A UFS is a very efficient way to merge content for a container image. Each set of discrete content is considered a layer, and layers can be reused between container images. Docker, for example, uses a Dockerfile to create a layered image on top of a base image. An example is shown in the following diagram:

Figure 1.3 – Sample Docker image

In Figure 1.3, the FROM command creates an initial layer from the ubuntu 18.04 image. The output of the two RUN commands creates discrete layers, while the final step is for Docker to add a thin read/write layer where all changes made by the running container are written. The MAINTAINER and CMD commands don’t generate layers; they only add metadata to the image.
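
As an illustration, a Dockerfile along the lines of the one described in Figure 1.3 might look as follows (the exact packages and command are placeholders rather than the figure's contents):

# Layer 1: the base image
FROM ubuntu:18.04
# Metadata only: no layer is created
MAINTAINER someone@example.com
# Layer 2: the filesystem changes made by this command
RUN apt-get update
# Layer 3: the filesystem changes made by this command
RUN apt-get install -y nginx
# Metadata only: defines the default command, no layer is created
CMD ["nginx", "-g", "daemon off;"]

After building the image with docker build, you can run docker history against it to see each layer and confirm that the metadata-only instructions add zero-byte entries.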

Docker is the most prevalent container runtime environment and can be used on Windows, macOS, and Linux, so it provides an easy way to learn how to build and run containers (note, however, that the Windows and Linux operating systems are fundamentally different, so at present you can’t run Windows containers on Linux). While support for Docker as a runtime (the dockershim) has been removed from current versions of Kubernetes, the concepts and techniques in the next section will help you understand how containers work at a fundamental level.

How to use Docker

The simplest way to get started with containers is to use Docker on your development machine. As the OCI has standardized the container image format, images created locally can be used anywhere. If you have already installed Docker, the following command will run a simple container from the official hello-world sample image and show its output:

$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
2db29710123e: Pull complete
...
Status: Downloaded newer image for hello-world:latest
Hello from Docker!

The preceding output shows that your installation appears to be working correctly. You can see that the hello-world image is “pulled” from a repository; this defaults to the public Docker Hub repository at https://hub.docker.com/. We will discuss repositories, and in particular Amazon Elastic Container Registry (ECR), in Chapter 11, Building Applications and Pushing Them to Amazon ECR.

Important note

If you would like to know how to install and run with Docker, you can refer to the Get Started guide in the Docker official documentation: https://docs.docker.com/get-started/.

Meanwhile, you can use the following command to list containers on your host:

$ docker ps -a
CONTAINER ID   IMAGE         COMMAND    CREATED          STATUS                      PORTS     NAMES
39bad0810900   hello-world   "/hello"   10 minutes ago   Exited (0) 10 minutes ago             distracted_tereshkova
...

Although the preceding commands are simple, they demonstrate how easy it is to build and run containers. When you use the Docker CLI (the client), it interacts with the runtime engine, the Docker daemon. When the daemon receives a request from the CLI, it performs the corresponding action. In the docker run example, this means creating a container from the hello-world image. If the image is already stored on your machine, the daemon will use that copy; otherwise, it will try to pull the image from a public Docker repository such as Docker Hub.
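
You can observe this behavior yourself: pulling the image explicitly caches it locally, so a subsequent docker run no longer needs to download it:

$ docker pull hello-world      # fetch the image from Docker Hub
$ docker images hello-world    # confirm it is now stored locally
$ docker run hello-world       # runs without the 'Unable to find image' message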

As discussed in the previous section, Docker now leverages containerd and runc. You can use the docker info command to view the versions of these components:

$ docker info
...
 buildx: Docker Buildx (Docker Inc., v0.8.1)
 compose: Docker Compose (Docker Inc., v2.3.3)
 scan: Docker Scan (Docker Inc., v0.17.0)
...
 containerd version: 2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc
 runc version: v1.0.3-0-gf46b6ba
 init version: de40ad0
...
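
If containerd and runc are installed on your PATH, which is usually the case alongside Docker, you can also query their versions directly (the exact output format varies between versions):

$ containerd --version
$ runc --version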

In this section, we looked at the underlying technology used in Linux to support containers. In the following sections, we will look at container orchestration and Kubernetes in more detail.