Continuous Delivery with Docker and Jenkins, Third Edition

By: Rafał Leszko

Overview of this book

This updated third edition of Continuous Delivery with Docker and Jenkins explains the advantages of combining Jenkins and Docker to improve the continuous integration and delivery process of app development. You’ll start by setting up a Docker server and configuring Jenkins on it. Next, you’ll discover the steps for building applications and microservices with Dockerfiles and integrating them with Jenkins using continuous delivery processes such as continuous integration, automated acceptance testing, configuration management, and Infrastructure as Code. Moving ahead, you'll learn how to ensure quick application deployment with Docker containers, along with scaling Jenkins using Kubernetes. Later, you’ll explore how to deploy applications using Docker images and test them with Jenkins. Toward the concluding chapters, the book focuses on the missing parts of the CD pipeline, such as environments and infrastructure, application versioning, and non-functional testing. By the end of this continuous integration and continuous delivery book, you’ll have gained the skills you need to enhance the DevOps workflow by integrating the functionalities of Docker and Jenkins.
Table of Contents (16 chapters)

Section 1 – Setting Up the Environment
Section 2 – Architecting and Testing an Application
Section 3 – Deploying an Application

Advanced Kubernetes

Kubernetes provides a way to modify your deployment dynamically at runtime. This is especially important if your application is already running in production and you need to support zero-downtime deployments. First, let's look at how to scale up an application, and then present the general approach Kubernetes takes to any deployment change.
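
As a quick sketch of what zero-downtime deployments rely on, the manifest below declares a RollingUpdate strategy for the Calculator Deployment, so Pods are replaced gradually rather than all at once. The labels, field values, and image placeholder are illustrative assumptions and may differ from the manifest used earlier in the book:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: calculator-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: calculator              # illustrative label; match your own manifest
  strategy:
    type: RollingUpdate            # replace Pods gradually to keep the service available
    rollingUpdate:
      maxUnavailable: 1            # at most one Pod may be down during an update
      maxSurge: 1                  # at most one extra Pod may be created above the desired count
  template:
    metadata:
      labels:
        app: calculator
    spec:
      containers:
      - name: calculator
        image: <calculator-image>  # placeholder for the Calculator image built earlier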

Scaling an application

Let's imagine that our Calculator application is getting popular. People have started using it, and the traffic is so high that our three Pod replicas are overloaded. What can we do now?

Luckily, kubectl provides a simple way to scale deployments up and down using the scale command. Let's scale our Calculator deployment to five instances:

$ kubectl scale --replicas 5 deployment calculator-deployment

That's it; our application is now scaled up:

$ kubectl get pods
NAME                 ...
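
Note that kubectl scale changes the replica count imperatively. If you keep the Deployment definition in a YAML file under version control, you can achieve the same result declaratively by setting replicas: 5 in that file and reapplying it (the filename below is only an example):

$ kubectl apply -f calculator-deployment.yaml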