Application scheduling


Now that we understand how to run containers in pods and even recover from failure, it is useful to look at how new containers are scheduled onto our cluster nodes.

As mentioned earlier, the default behavior of the Kubernetes scheduler is to spread container replicas across the nodes in our cluster. In the absence of other constraints, the scheduler places new pods on the nodes running the fewest other pods that belong to the same service or replication controller.
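As a quick sketch of this spreading behavior (the names and label values below are placeholders, not taken from this book's examples), the following replication controller runs three replicas of a small web image. On a multi-node cluster, listing the pods with the wide output option shows which node each replica landed on, and the default scheduler will typically have spread them across the nodes:

apiVersion: v1
kind: ReplicationController
metadata:
  name: spread-demo            # hypothetical name for this sketch
spec:
  replicas: 3
  selector:
    app: spread-demo
  template:
    metadata:
      labels:
        app: spread-demo
    spec:
      containers:
      - name: spread-demo
        image: nginx:1.25      # any small image will do
        ports:
        - containerPort: 80

After saving this as spread-demo-rc.yaml, create it and inspect the placement with the following commands:

kubectl create -f spread-demo-rc.yaml
kubectl get pods -o wide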

Additionally, the scheduler can take constraints into account based on the resources available on each node. Today, this covers minimum CPU and memory allocations. In Docker terms, these map to the --cpu-shares and --memory flags under the covers.
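To make these resource constraints concrete, here is a minimal sketch of a pod definition (the pod and container names are illustrative only) that declares minimum CPU and memory allocations through resources.requests and caps usage through resources.limits:

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo          # hypothetical name for this sketch
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:                # the minimum the scheduler must find free on a node
        cpu: "500m"            # half a CPU core
        memory: "128Mi"
      limits:                  # hard caps enforced at runtime
        cpu: "1"
        memory: "256Mi"

The scheduler only considers nodes whose unreserved capacity covers the requests; the limits are what translate into the underlying Docker flags mentioned previously.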

When additional constraints are defined, Kubernetes checks each node for available resources. If a node does not meet all the constraints, the scheduler moves on to the next one. If no nodes can be found that meet the criteria, then we will see a scheduling...