Kubernetes - A Complete DevOps Cookbook

By: Murat Karslioglu

Overview of this book

Kubernetes is a popular open source orchestration platform for managing containers in a cluster environment. With this Kubernetes cookbook, you’ll learn how to implement Kubernetes using a recipe-based approach. The book will prepare you to create highly available Kubernetes clusters on multiple clouds such as Amazon Web Services (AWS), Google Cloud Platform (GCP), Azure, Alibaba, and on-premises data centers. Starting with recipes for installing and configuring Kubernetes instances, you’ll discover how to work with Kubernetes clients, services, and key metadata. You’ll then learn how to build continuous integration/continuous delivery (CI/CD) pipelines for your applications, and understand various methods to manage containers. As you advance, you’ll delve into Kubernetes' integration with Docker and Jenkins, and even perform a batch process and configure data volumes. You’ll get to grips with methods for scaling, security, monitoring, logging, and troubleshooting. Additionally, this book will take you through the latest updates in Kubernetes, including volume snapshots, creating high availability clusters with kops, running workload operators, new inclusions around kubectl and more. By the end of this book, you’ll have developed the skills required to implement Kubernetes in production and manage containers proficiently.
Table of Contents (12 chapters)

Configuring a Kubernetes cluster on Google Cloud Platform

This section will take you through step-by-step instructions to configure Kubernetes clusters on GCP. You will learn how to use GKE to run a hosted Kubernetes cluster without needing to provision or manage master and etcd instances.

Getting ready

All the operations mentioned here require a GCP account with billing enabled. If you don't have one already, go to the Google Cloud Platform website and create an account.

On Google Cloud Platform (GCP), you have two main options when it comes to running Kubernetes. You can consider using Google Compute Engine (GCE) if you'd like to manage your deployment completely and have specific powerful instance requirements. Otherwise, it's highly recommended to use the managed Google Kubernetes Engine (GKE).

How to do it…

This section is further divided into the following subsections to make this process easier to follow:

  • Installing the command-line tools to configure GCP services
  • Provisioning a managed Kubernetes cluster on GKE
  • Connecting to GKE clusters

Installing the command-line tools to configure GCP services

In this recipe, we will get the primary CLI for Google Cloud Platform, gcloud, installed so that we can configure GCP services:

  1. Run the following command to download the gcloud CLI:

$ curl https://sdk.cloud.google.com | bash
  2. Initialize the SDK and follow the instructions given:
$ gcloud init
  3. During the initialization, when asked, select either an existing project that you have permissions for or create a new project.
  4. Enable the Compute Engine APIs for the project:
$ gcloud services enable compute.googleapis.com
Operation "operations/acf.07e3e23a-77a0-4fb3-8d30-ef20adb2986a" finished successfully.
  5. Set a default zone:
$ gcloud config set compute/zone us-central1-a
  6. Make sure you can start up a GCE instance from the command line:
$ gcloud compute instances create "devops-cookbook" \
--zone "us-central1-a" --machine-type "f1-micro"
  7. Delete the test VM:
$ gcloud compute instances delete "devops-cookbook"

If all the commands are successful, you can provision your GKE cluster.

Provisioning a managed Kubernetes cluster on GKE

Let's perform the following steps:

  1. Create a cluster:
$ gcloud container clusters create k8s-devops-cookbook-1 \
--cluster-version latest --machine-type n1-standard-2 \
--image-type UBUNTU --disk-type pd-standard --disk-size 100 \
--no-enable-basic-auth --metadata disable-legacy-endpoints=true \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--num-nodes "3" --enable-stackdriver-kubernetes \
--no-enable-ip-alias --enable-autoscaling --min-nodes 1 \
--max-nodes 5 --enable-network-policy \
--addons HorizontalPodAutoscaling,HttpLoadBalancing \
--enable-autoupgrade --enable-autorepair --maintenance-window "10:00"

Cluster creation will take 5 minutes or more to complete.
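Cluster creation runs in the background, so it can help to poll its status from another terminal. The following is a small sketch, assuming the cluster name and default zone used earlier in this chapter:

```shell
# List the cluster and watch its STATUS field (PROVISIONING -> RUNNING).
$ gcloud container clusters list --filter="name=k8s-devops-cookbook-1"

# Once the status is RUNNING, inspect the cluster details.
$ gcloud container clusters describe k8s-devops-cookbook-1 --zone us-central1-a
```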

Connecting to GKE clusters

To get access to your GKE cluster, you need to follow these steps:

  1. Configure kubectl to access your k8s-devops-cookbook-1 cluster:
$ gcloud container clusters get-credentials k8s-devops-cookbook-1
  2. Verify your Kubernetes cluster:
$ kubectl get nodes

Now, you have a three-node GKE cluster up and running.
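Beyond listing nodes, a couple of quick checks, sketched here, confirm that kubectl is talking to the right cluster:

```shell
# Show the Kubernetes control-plane endpoint of the connected cluster.
$ kubectl cluster-info

# Confirm that the active kubectl context points at the GKE cluster.
$ kubectl config current-context
```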

How it works…

This recipe showed you how to quickly provision a GKE cluster using some default parameters.

In Step 1, we created a cluster with some default parameters. While all of the parameters are very important, I want to explain some of them here.

--cluster-version sets the Kubernetes version to use for the master and nodes. Only use it if you want to use a version that's different from the default. To get the available version information, you can use the gcloud container get-server-config command.

We set the instance type by using the --machine-type parameter. If it's not set, the default is n1-standard-1. To get the list of predefined types, you can use the gcloud compute machine-types list command.

The default image type is COS, but my personal preference is Ubuntu, so I used --image-type UBUNTU to set the OS image. To get the list of available image types, you can use the gcloud container get-server-config command.
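The discovery commands mentioned in the last three paragraphs can be run together. The zone flag shown here is an assumption based on the default zone set earlier in this recipe:

```shell
# Show valid master/node versions and image types for the zone.
$ gcloud container get-server-config --zone us-central1-a

# List the predefined machine types available in the same zone.
$ gcloud compute machine-types list --filter="zone:us-central1-a"
```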

GKE offers advanced cluster management features and comes with the automatic scaling of node instances, auto-upgrade, and auto-repair to maintain node availability. --enable-autoupgrade enables the GKE auto-upgrade feature for cluster nodes and --enable-autorepair enables the automatic repair feature; both run during the window defined with the --maintenance-window parameter. The time set here is in the UTC time zone and must be in HH:MM format.
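Since gcloud rejects a malformed window value, it can be worth validating the string in your automation scripts before passing it on. The following is a minimal sketch, where is_valid_window is a hypothetical helper written for this example:

```shell
# is_valid_window is a hypothetical helper: it succeeds only if its argument
# is a 24-hour HH:MM string, the format --maintenance-window expects.
is_valid_window() {
  case "$1" in
    [0-1][0-9]:[0-5][0-9]|2[0-3]:[0-5][0-9]) return 0 ;;
    *) return 1 ;;
  esac
}

is_valid_window "10:00" && echo "valid"    # a valid UTC maintenance window
is_valid_window "25:00" || echo "rejected" # not a valid 24-hour time
```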

There's more…

The following are some alternative methods and additional operations that can be employed besides the recipe described in the previous section:

  • Using Google Cloud Shell
  • Deploying with a custom network configuration
  • Deleting your cluster
  • Viewing the Workloads dashboard

Using Google Cloud Shell

As an alternative to your Linux workstation, you can get a CLI in your browser to manage your cloud instances.

Open Cloud Shell from the Google Cloud Console to get a browser-based shell.

Deploying with a custom network configuration

The following steps demonstrate how to provision your cluster with a custom network configuration:

  1. Create a VPC network:
$ gcloud compute networks create k8s-devops-cookbook \
--subnet-mode custom
  2. Create a subnet in your VPC network, replacing <IP-range> with the CIDR block you want to use. In our example, the subnet is named kubernetes:
$ gcloud compute networks subnets create kubernetes \
--network k8s-devops-cookbook --range <IP-range>
  3. Create a firewall rule to allow internal traffic:
$ gcloud compute firewall-rules create k8s-devops-cookbook-allow-int \
--allow tcp,udp,icmp --network k8s-devops-cookbook
  4. Create a firewall rule to allow external SSH, ICMP, and HTTPS traffic:
$ gcloud compute firewall-rules create k8s-devops-cookbook-allow-ext \
--allow tcp:22,tcp:6443,icmp --network k8s-devops-cookbook
  5. Verify the rules:
$ gcloud compute firewall-rules list
k8s-devops-cookbook-allow-ext k8s-devops-cookbook INGRESS 1000 tcp:22,tcp:6443,icmp False
k8s-devops-cookbook-allow-int k8s-devops-cookbook INGRESS 1000 tcp,udp,icmp False
  6. Add the --network k8s-devops-cookbook and --subnetwork kubernetes parameters to your container clusters create command and run it.
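Putting the last step together, the create command from the previous recipe would be extended as in this sketch; the cluster name k8s-devops-cookbook-2 and the shortened flag list are assumptions for illustration:

```shell
# Create a GKE cluster attached to the custom VPC network and subnet.
# Only a few of the original flags are repeated here for brevity.
$ gcloud container clusters create k8s-devops-cookbook-2 \
    --machine-type n1-standard-2 --num-nodes 3 \
    --network k8s-devops-cookbook --subnetwork kubernetes
```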

Deleting your cluster

To delete your k8s-devops-cookbook-1 cluster, use the following command:

$ gcloud container clusters delete k8s-devops-cookbook-1

This process may take a few minutes. When it's finished, you will get a confirmation message.

Viewing the Workloads dashboard

On GCP, instead of using the Kubernetes Dashboard application, you can use the built-in Workloads dashboard and deploy containerized applications through Google Marketplace. Follow these steps:

  1. To access the Workloads dashboard from your GCP dashboard, choose your GKE cluster and click on Workloads.
  2. Click on Show system workloads to see the existing components and containers that have been deployed in the kube-system namespace.

See also