
The advantages of Continuous Integration/Continuous Deployment


ThoughtWorks defines Continuous Integration as a development practice that requires developers to integrate code into a shared repository several times a day. By having a continuous process of building and deploying code, organizations are able to instill quality control and testing as part of the everyday work cycle. The result is that updates and bug fixes happen much faster and the overall quality improves.

However, there has always been a challenge in creating development environments that match those of testing and production. Often, inconsistencies between these environments make it difficult to gain the full advantage of Continuous Delivery. Continuous Integration is the first step in speeding up your organization's software delivery life cycle, and it helps you get your software features in front of customers quickly and reliably.

The concept of Continuous Delivery/Deployment uses Continuous Integration to enable developers to have truly portable deployments. Containers that are deployed on a developer's laptop are easily deployed on an in-house staging server. They are then easily transferred to the production server running in the cloud. This is facilitated by the nature of containers, which are built from files that specify parent layers, as we discussed previously. One advantage of this is that it becomes very easy to ensure that OS, package, and application versions are the same across development, staging, and production environments. Because all the dependencies are packaged into the container's layers, the same host server can have multiple containers running a variety of OS or package versions. Furthermore, we can have various languages and frameworks on the same host server without the typical dependency clashes we would get in a VM with a single operating system.

This sets the stage for Continuous Delivery/Deployment of the application, as the operations teams or the developers themselves can focus on getting deployments and application rollouts correct, without having to worry about the intricacies of dependencies.
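
This portability is easy to see in a manifest. The following is a minimal sketch of a Deployment, with a hypothetical image name, registry, and namespace: because every dependency is baked into the image's layers, promoting a release to another environment is simply a matter of applying the same pinned image there.

    # Minimal sketch (hypothetical names): the same pinned image is promoted
    # unchanged from one environment to the next; only the namespace differs.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-frontend
      namespace: staging          # change to "production" to promote the same artifact
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web-frontend
      template:
        metadata:
          labels:
            app: web-frontend
        spec:
          containers:
          - name: web-frontend
            # Every layer (OS, packages, application) ships inside this image,
            # so development, staging, and production run identical bits.
            image: registry.example.com/web-frontend:1.4.2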

Continuous Delivery is the embodiment of a process wherein all code changes are automatically built and tested (Continuous Integration) and then released into production (Continuous Deployment). If this process captures the correct quality gates, security guarantees, and unit/integration/system tests, the development teams will constantly release production-ready, deployable artifacts that have moved through an automated and standardized process.
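
As a concrete illustration, here is a minimal pipeline sketch written in GitLab CI syntax; the choice of CI tool, the Node.js test job, the image names, and the deploy command are assumptions for the example, and the deploy job presumes the runner already has access to the cluster. Each push is built into an image, tested, and, once the gates pass, rolled out with kubectl.

    # Hypothetical .gitlab-ci.yml: build, test, then deploy to Kubernetes.
    stages:
      - build
      - test
      - deploy

    build-image:
      stage: build
      image: docker:24
      services:
        - docker:24-dind
      script:
        - docker build -t registry.example.com/web-frontend:$CI_COMMIT_SHORT_SHA .
        - docker push registry.example.com/web-frontend:$CI_COMMIT_SHORT_SHA

    unit-tests:
      stage: test
      image: node:18                # assumes a Node.js application
      script:
        - npm ci
        - npm test

    deploy-production:
      stage: deploy
      image: bitnami/kubectl:latest
      script:
        # Point the existing Deployment at the freshly built image.
        - kubectl set image deployment/web-frontend web-frontend=registry.example.com/web-frontend:$CI_COMMIT_SHORT_SHA
      environment: production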

It's important to note that CD requires the engineering teams to automate more than just unit tests. In order to utilize CD in sophisticated scheduling and orchestration systems such as Kubernetes, teams need to verify application functionality across many dimensions before changes are deployed to customers. We'll explore the deployment strategies that Kubernetes has to offer in later chapters.
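
Kubernetes provides some of that verification machinery out of the box. As a small sketch, with an assumed port and health endpoint, a readiness probe keeps a new pod out of rotation until the application proves it can answer requests, and a liveness probe restarts it if it stops responding:

    # Sketch of container health checks; the image, port, path, and timings
    # are illustrative assumptions.
    apiVersion: v1
    kind: Pod
    metadata:
      name: web-frontend
    spec:
      containers:
      - name: web-frontend
        image: registry.example.com/web-frontend:1.4.2
        ports:
        - containerPort: 8080
        readinessProbe:             # no traffic is routed until this succeeds
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:              # the container is restarted if this fails
          httpGet:
            path: /healthz
            port: 8080
          periodSeconds: 30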

Lastly, it's important to keep in mind that utilizing Kubernetes with CI/CD reduces the risk of the many common problems that technology firms face:

  • Long release cycles: If it takes a long time to release code to your users, they're missing out on potential functionality, and that delay results in lost revenue. A manual testing or release process slows down getting changes into production, and therefore in front of your customers.
  • Fixing code is hard: When you shorten the release cycle, you're able to discover and remediate bugs closer to the point of their creation. This lowers the cost of each fix, as the cost of fixing a bug grows with the time between its introduction and its discovery.
  • Release better: The more you release, the better you get at releasing. Challenging your developers and operators to build automation, monitoring, and logging around the processes of CI/CD will make your pipeline more robust. As you release more often, the amount of difference between releases also decreases. A smaller difference allows teams to troubleshoot potential breaking changes more quickly, which in turn gives them more time to refine the release process further. It's a virtuous cycle!


Resource utilization

The well-defined isolation and layered filesystem also make containers ideal for running systems with a very small footprint and a domain-specific purpose. A streamlined deployment and release process means we can deploy quickly and often. As such, many companies have reduced their deployment time from weeks or months to days, and in some cases hours. This development life cycle lends itself extremely well to small, targeted teams working on small chunks of a larger application.
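
That small footprint can also be stated explicitly to the scheduler. In the sketch below, with an illustrative image name and illustrative numbers, the container requests only a fraction of a CPU core and a few dozen megabytes of memory, which is exactly what allows many such containers to be packed densely onto a single host:

    # Hypothetical pod with explicit resource requests and limits.
    apiVersion: v1
    kind: Pod
    metadata:
      name: small-service
    spec:
      containers:
      - name: small-service
        image: registry.example.com/small-service:0.3.0
        resources:
          requests:                 # reserved by the scheduler when placing the pod
            cpu: 100m
            memory: 64Mi
          limits:                   # hard ceiling enforced at runtime
            cpu: 250m
            memory: 128Mi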