The Kubernetes Bible

By: Nassim Kebbani, Piotr Tylenda, Russ McKendrick

Overview of this book

With its broad adoption across various industries, Kubernetes is helping engineers with the orchestration and automation of container deployments on a large scale, making it the leading container orchestration system and the most popular choice for running containerized applications. This Kubernetes book starts with an introduction to Kubernetes and containerization, covering the setup of your local development environment and the roles of the most important Kubernetes components. Along with covering the core concepts necessary to make the most of your infrastructure, this book will also help you get acquainted with the fundamentals of Kubernetes. As you advance, you'll learn how to manage Kubernetes clusters on cloud platforms, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), and develop and deploy real-world applications in Kubernetes using practical examples. Additionally, you'll get to grips with managing microservices along with best practices. By the end of this book, you'll be equipped with battle-tested knowledge of advanced Kubernetes topics, such as scheduling of Pods and managing incoming traffic to the cluster, and be ready to work with Kubernetes on cloud platforms.
Table of Contents (28 chapters)

Section 1: Introducing Kubernetes
Section 2: Diving into Kubernetes Core Concepts
Section 3: Using Managed Pods with Controllers
Section 4: Deploying Kubernetes on the Cloud
Section 5: Advanced Kubernetes

How can Kubernetes help you to manage your Docker containers?

Now, we will focus a little more on Kubernetes, which is the subject of this book. Here, we're going to discover that Kubernetes was built to run container runtimes in production by addressing the operational needs that production environments demand.

Understanding that Kubernetes is meant to use Docker in production

If you open the official Kubernetes website (at https://kubernetes.io), the title you will see is Production-Grade Container Orchestration:

Figure 1.5 – The Kubernetes home page showing the header and introducing Kubernetes as a production container orchestration platform

This phrase perfectly sums up what Kubernetes is: a container orchestration platform for production. Kubernetes does not aim to replace Docker or any of its features; rather, it aims to help us manage clusters of machines running Docker. When working with Kubernetes, you use both Kubernetes and full-featured, standard Docker installations.

The title refers to production. Indeed, the concept of production is absolutely central to Kubernetes: it was conceived and designed to meet modern production needs. Managing production workloads today is very different from what it was in the 2000s. Back then, your production workload would consist of just a few bare-metal servers, if not a single one, on premises. These servers mostly ran monoliths installed directly on the host Linux system. Today, however, thanks to public cloud platforms such as Amazon Web Services (AWS) or Google Cloud Platform (GCP), anyone can get hundreds or even thousands of machines, in the form of instances or virtual machines, with just a few clicks. Even better, we no longer deploy our applications directly on the host system; we deploy them as containerized microservices on top of the Docker engine, thereby reducing the footprint of the host system.

A problem arises when you have to manage the Docker installation on each of these virtual machines in the cloud. Let's imagine that you have 10 (or 100, or 1,000) machines launched on your preferred cloud and you want to achieve a very simple task: deploy a containerized app with Docker on each of these machines.

You could do this by running the docker run command on each of your machines. It would work, but of course, there is a better way to do it, and that's by using a container orchestrator such as Kubernetes. To give you an extremely simplified view of Kubernetes: it is essentially a REST API that keeps a registry of the machines running a Docker daemon.
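
To make the contrast concrete, here is a minimal sketch of the manual approach: a shell loop that connects to each machine over SSH and starts the same container with docker run. The hostnames (node-1 to node-3) and the nginx image are hypothetical placeholders chosen for illustration, not something defined in this chapter:

#!/usr/bin/env bash
# Manual approach: start the same container on every machine, one by one.
# The hostnames below are hypothetical; replace them with your own instances.
HOSTS="node-1 node-2 node-3"

for host in $HOSTS; do
  # Ask each remote Docker daemon, over SSH, to run an example nginx container.
  ssh "$host" docker run --detach --name web --publish 80:80 nginx:latest
done

This works for three machines, but you are now responsible for retries, crashed containers, and every new machine you add later; that bookkeeping is exactly what an orchestrator takes over.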

Again, this is an extremely simplified definition of Kubernetes. In fact, it's not made of a single centralized REST API because, as you might have gathered, Kubernetes itself was built as a suite of microservices.
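
If you already have access to a cluster, you can glimpse this suite of components for yourself. The following commands are a hedged sketch that assumes a kubeadm-style cluster, where the control-plane components run as Pods in the kube-system namespace; managed cloud offerings hide most of them from you:

# Ask the Kubernetes REST API (via kubectl) which machines it knows about.
kubectl get nodes

# List the control-plane microservices themselves: on a kubeadm-style cluster,
# the API server, scheduler, controller manager, and etcd typically appear here.
kubectl get pods --namespace kube-system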