Mastering Kubernetes - Fourth Edition

By: Gigi Sayfan

Overview of this book

The fourth edition of the bestseller Mastering Kubernetes includes the most recent tools and code to enable you to learn the latest features of Kubernetes 1.25. This book contains a thorough exploration of complex concepts and best practices to help you master the skills of designing and deploying large-scale distributed systems on Kubernetes clusters. You’ll learn how to run complex stateless and stateful microservices on Kubernetes, including advanced features such as horizontal pod autoscaling, rolling updates, resource quotas, and persistent storage backends. In addition, you’ll understand how to utilize serverless computing and service meshes. Further, two new chapters have been added. “Governing Kubernetes” covers the problem of policy management, how admission control addresses it, and how policy engines provide a powerful governance solution. “Running Kubernetes in Production” shows you what it takes to run Kubernetes at scale across multiple cloud providers, multiple geographical regions, and multiple clusters, and it also explains how to handle topics such as upgrades, capacity planning, dealing with cloud provider limits/quotas, and cost management. By the end of this Kubernetes book, you’ll have a strong understanding of, and hands-on experience with, a wide range of Kubernetes capabilities.

What this book covers

In Chapter 1, Understanding Kubernetes Architecture, we will build the foundation necessary to utilize Kubernetes to its full potential. We will start by understanding what Kubernetes is, what Kubernetes isn’t, and what container orchestration means exactly. Then we will cover important Kubernetes concepts that will form the vocabulary we will use throughout the book.

In Chapter 2, Creating Kubernetes Clusters, we will roll up our sleeves and build some Kubernetes clusters using minikube, KinD, and k3d. We will discuss and evaluate other tools such as kubeadm and Kubespray. We will also look into deployment environments such as local, cloud, and bare metal.

In Chapter 3, High Availability and Reliability, we will dive into the topic of highly available clusters. This is a complicated topic; the Kubernetes project and the community haven’t settled on one true way to achieve high-availability nirvana.

There are many aspects to highly available Kubernetes clusters, such as ensuring that the control plane can keep functioning in the face of failures, protecting the cluster state in etcd, protecting the system’s data, and recovering capacity and/or performance quickly. Different systems will have different reliability and availability requirements.

In Chapter 4, Securing Kubernetes, we will explore the important topic of security. Kubernetes clusters are complicated systems composed of multiple layers of interacting components. The isolation and compartmentalization of different layers is very important when running critical applications. To secure the system and ensure proper access to resources, capabilities, and data, we must first understand the unique challenges facing Kubernetes as a general-purpose orchestration platform that runs unknown workloads. Then we can take advantage of various security, isolation, and access control mechanisms to make sure the cluster, the applications running on it, and the data are all safe. We will discuss various best practices and when it is appropriate to use each mechanism.

In Chapter 5, Using Kubernetes Resources in Practice, we will design a fictional massive-scale platform that will challenge Kubernetes’ capabilities and scalability. The Hue platform is all about creating an omniscient and omnipotent digital assistant. Hue is a digital extension of you. Hue will help you do anything, find anything, and, in many cases, will do a lot on your behalf. It will obviously need to store a lot of information, integrate with many external services, respond to notifications and events, and be smart about interacting with you.

In Chapter 6, Managing Storage, we’ll look at how Kubernetes manages storage. Storage is very different from computation, but at a high level, they are both resources. Kubernetes, as a generic platform, takes the approach of abstracting storage behind a programming model and a set of plugins for storage providers.
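As a taste of that programming model, here is a minimal sketch using the official Kubernetes Python client to request storage through a PersistentVolumeClaim: the claim only declares the capacity and access mode it needs, and the storage class behind it decides which plugin actually provisions the volume. The namespace, claim name, and storage class here are illustrative, not taken from the book.

    # Minimal sketch: requesting storage via a PersistentVolumeClaim with the
    # official Python client (pip install kubernetes). All names are illustrative.
    from kubernetes import client, config

    config.load_kube_config()  # use the current kubeconfig context

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="demo-claim"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="standard",  # hypothetical storage class
            resources=client.V1ResourceRequirements(
                requests={"storage": "1Gi"}
            ),
        ),
    )

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )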

In Chapter 7, Running Stateful Applications with Kubernetes, we will learn how to run stateful applications on Kubernetes. Kubernetes takes a lot of work out of our hands by automatically starting and restarting pods across the cluster nodes as needed, based on complex requirements and configurations such as namespaces, limits, and quotas. But when pods run storage-aware software, such as databases and queues, relocating a pod can cause the system to break.

In Chapter 8, Deploying and Updating Applications, we will explore the automated pod scalability that Kubernetes provides, how it affects rolling updates, and how it interacts with quotas. We will touch on the important topic of provisioning and how to choose and manage the size of the cluster. Finally, we will go over how the Kubernetes team improved the performance of Kubernetes and how they test the limits of Kubernetes with the Kubemark tool.
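To make the idea of automated pod scalability a bit more concrete, here is a minimal sketch (with hypothetical names and thresholds) that uses the official Kubernetes Python client to attach a HorizontalPodAutoscaler to an existing Deployment, letting Kubernetes grow or shrink the replica count based on average CPU utilization.

    # Minimal sketch: a HorizontalPodAutoscaler for an existing Deployment,
    # created with the official Python client. Names and targets are illustrative.
    from kubernetes import client, config

    config.load_kube_config()

    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="web-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1",
                kind="Deployment",
                name="web",  # hypothetical existing Deployment
            ),
            min_replicas=2,
            max_replicas=10,
            target_cpu_utilization_percentage=70,  # scale out above ~70% average CPU
        ),
    )

    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa
    )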

In Chapter 9, Packaging Applications, we are going to look into Helm, the Kubernetes package manager. Every successful and non-trivial platform must have a good packaging system. Helm was developed by Deis (acquired by Microsoft on April 4, 2017) and later contributed to the Kubernetes project directly. It became a CNCF project in 2018. We will start by understanding the motivation for Helm, its architecture, and its components.

In Chapter 10, Exploring Kubernetes Networking, we will examine the important topic of networking. Kubernetes, as an orchestration platform, manages containers/pods running on different machines (physical or virtual) and requires an explicit networking model.

In Chapter 11, Running Kubernetes on Multiple Clusters, we’ll take it to the next level by running Kubernetes on multiple clouds and clusters. A Kubernetes cluster is a closely knit unit where all the components run in relative proximity and are connected by a fast network (typically a physical data center or cloud provider availability zone). This is great for many use cases, but there are several important use cases where systems need to scale beyond a single cluster.

In Chapter 12, Serverless Computing on Kubernetes, we will explore the fascinating world of serverless computing in the cloud. The term “serverless” is getting a lot of attention, but it is a misnomer used to describe two different paradigms. A true serverless application runs as a web application in the user’s browser or a mobile app and only interacts with external services. The types of serverless systems we build on Kubernetes are different.

In Chapter 13, Monitoring Kubernetes Clusters, we’re going to talk about how to make sure your systems are up and running and performing correctly, and how to respond when they aren’t. We discussed related topics in Chapter 3, High Availability and Reliability. The focus here is on knowing what’s going on in your system and what practices and tools you can use.

In Chapter 14, Utilizing Service Meshes, we will learn how service meshes allow you to externalize cross-cutting concerns like monitoring and observability from the application code. The service mesh is a true paradigm shift in the way you can design, evolve, and operate distributed systems on Kubernetes. I like to think of it as aspect-oriented programming for cloud-native distributed systems.

In Chapter 15, Extending Kubernetes, we will dig deep into the guts of Kubernetes. We will start with the Kubernetes API and learn how to work with Kubernetes programmatically via direct access to the API, the Go client, and automating kubectl. Then we’ll look into extending the Kubernetes API with custom resources. The last part is all about the various plugins Kubernetes supports. Many aspects of Kubernetes operation are modular and designed for extension. We will examine the API aggregation layer and several types of plugins, such as custom schedulers, authorization, admission control, custom metrics, and volumes. Finally, we’ll look into extending kubectl and adding your own commands.
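The chapter itself works with the Go client; purely as an illustration of what programmatic access to the Kubernetes API looks like, here is a minimal equivalent sketch using the official Python client to list every pod in the cluster, much like kubectl get pods --all-namespaces.

    # Minimal sketch of programmatic API access: list all pods in the cluster
    # using the official Python client (the book's examples use the Go client).
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod

    core = client.CoreV1Api()
    for pod in core.list_pod_for_all_namespaces(watch=False).items:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")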

In Chapter 16, Governing Kubernetes, we will learn about the growing role of Kubernetes in large enterprise organizations, as well as what governance is and how it is applied in Kubernetes. We will look at what policy engines do, review some popular ones, and then dive deep into Kyverno.

In Chapter 17, Running Kubernetes in Production, we will turn our attention to the overall management of Kubernetes in production. The focus will be on running multiple managed Kubernetes clusters in the cloud. There are many different aspects to running Kubernetes in production; we will focus mostly on the compute side, which is the most fundamental and dominant.

In Chapter 18, The Future of Kubernetes, we’ll look at the future of Kubernetes from multiple angles. We’ll start with the momentum of Kubernetes since its inception, across dimensions such as community, ecosystem, and mindshare. Spoiler alert: Kubernetes won the container orchestration wars by a landslide. As Kubernetes grows and matures, the battle lines shift from beating competitors to fighting against its own complexity. Usability, tooling, and education will play a major role, as container orchestration is still a new, fast-moving, and not well-understood domain. Then we will take a look at some very interesting patterns and trends.