Kubernetes Cookbook - Second Edition

By: Hideto Saito, Hui-Chuan Chloe Lee, Ke-Jou Carol Hsu
Overview of this book

Kubernetes is an open source orchestration platform for managing containers in a cluster environment. With Kubernetes, you can configure and deploy containerized applications easily. This book gives you a quick brush-up on how Kubernetes works with containers, and an overview of the main Kubernetes concepts, such as Pods, Deployments, and Services. This book explains how to create Kubernetes clusters and run applications with proper authentication and authorization configurations. With real-world recipes, you'll learn how to create highly available Kubernetes clusters on AWS, GCP, and in on-premise datacenters with proper logging and monitoring setups. You'll also learn some useful tips on how to build a continuous delivery pipeline for your application. Upon completion of this book, you will be able to use Kubernetes in production and will have a better understanding of how to manage containers using Kubernetes.

Setting up the Kubernetes cluster on Linux via kubeadm

In this recipe, we are going to show how to create a Kubernetes cluster with kubeadm (https://github.com/kubernetes/kubeadm) on Linux servers. Kubeadm is a command-line tool that simplifies the procedure of creating and managing a Kubernetes cluster. Kubeadm leverages the fast deployment capability of Docker, running the system services of the Kubernetes master and the etcd server as containers. When triggered by the kubeadm command, the container services contact kubelet on the Kubernetes node directly; kubeadm also checks whether every component is healthy. Through the kubeadm setup steps, you can avoid running a long list of installation and configuration commands, as you would when building everything from scratch.

Getting ready

We will provide instructions for two types of OS:

  • Ubuntu Xenial 16.04 (LTS)
  • CentOS 7.4

Make sure the OS version matches before continuing. Furthermore, the software dependencies and network settings should also be verified before you proceed to the next step. Check the following items to prepare the environment:

  • Every node has a unique MAC address and product UUID: Some plugins use the MAC address or product UUID as a unique machine ID to identify nodes (for example, kube-dns). If they are duplicated in the cluster, kubeadm may not work while starting the plugin:
// check MAC address of your NIC
$ ifconfig -a
// check the product UUID on your host
$ sudo cat /sys/class/dmi/id/product_uuid
  • Every node has a different hostname: If the hostname is duplicated, the Kubernetes system may collect logs or statuses from multiple nodes into the same one.
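If your distribution uses systemd, a quick way to check, and if necessary change, the hostname is hostnamectl (the name node01 below is just an illustrative example):
// show the current hostname
$ hostnamectl
// set a unique hostname, for example node01
$ sudo hostnamectl set-hostname node01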
  • Docker is installed: As mentioned previously, the Kubernetes master will run its daemons as containers, and every node in the cluster should have Docker installed. To install Docker, you can follow the steps on the official website (Ubuntu: https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/, and CentOS: https://docs.docker.com/engine/installation/linux/docker-ce/centos/). Here we have Docker CE 17.06 installed on our machines; however, only Docker versions 1.11.2 to 1.13.1 and 17.03.x are verified with Kubernetes version 1.10.
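To double-check the installed Docker version on each machine, you can query the Docker daemon with a standard Docker command:
// print the Docker server version
$ docker version --format '{{.Server.Version}}'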
  • Network ports are available: The Kubernetes system services need network ports for communication. The ports in the following table should not be occupied, according to the role of the node:
Node role   Ports               System service
Master      6443                Kubernetes API server
            10248/10250/10255   kubelet local healthz endpoint/Kubelet API/Heapster (read-only)
            10251               kube-scheduler
            10252               kube-controller-manager
            10249/10256         kube-proxy
            2379/2380           etcd client/etcd server communication
Node        10250/10255         Kubelet API/Heapster (read-only)
            30000~32767         Port range reserved for exposing container services to the outside world
  • The Linux command, netstat, can help to check if the port is in use or not:

// list every listening port
$ sudo netstat -tulpn | grep LISTEN
  • Network tool packages are installed: ethtool and ebtables are two utilities required by kubeadm. They can be downloaded and installed with the apt-get or yum package managers.
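For example, a minimal installation of these two utilities looks as follows (the package names are the standard ones on both distributions):
// on Ubuntu
$ sudo apt-get install -y ethtool ebtables
// on CentOS
$ sudo yum install -y ethtool ebtables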

How to do it...

The installation procedures for two Linux OSes, Ubuntu and CentOS, are going to be introduced separately in this recipe as they have different setups.

Package installation

Let's get the Kubernetes packages first! The repository for downloading needs to be set in the source list of the package management system. Then, we are able to get them installed easily through the command-line.

Ubuntu

To install the Kubernetes packages on Ubuntu, perform the following steps:

  1. Some repositories are served over HTTPS. The apt-transport-https package must be installed to access the HTTPS endpoints:
$ sudo apt-get update && sudo apt-get install -y apt-transport-https
  2. Download the public key for accessing packages on Google Cloud, and add it as follows:
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
  3. Next, add a new source list for the Kubernetes packages:
$ sudo bash -c 'echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list'
  4. Finally, install the Kubernetes packages:
// on Kubernetes master
$ sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl
// on Kubernetes node

$ sudo apt-get update && sudo apt-get install -y kubelet
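Optionally, beyond the original steps, you may want to keep these packages from being upgraded accidentally by a routine apt-get upgrade; apt-mark can hold them at their current version:
// prevent unplanned upgrades of the Kubernetes packages
$ sudo apt-mark hold kubelet kubeadm kubectl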

CentOS

To install the Kubernetes packages on CentOS, perform the following steps:

  1. As with Ubuntu, new repository information needs to be added:
$ sudo vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
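If you prefer not to open an editor, the same repository file can be written non-interactively; the following is simply an equivalent sketch of the step above:
// write /etc/yum.repos.d/kubernetes.repo in one shot
$ sudo bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF'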
  2. Now, we are ready to pull the packages from the Kubernetes repository via the yum command:
// on Kubernetes master
$ sudo yum install -y kubelet kubeadm kubectl
// on Kubernetes node
$ sudo yum install -y kubelet
  3. Whichever OS you use, check the version of the packages you get!
// take it easy! the server connection failed since there is no server running yet
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server 192.168.122.101:6443 was refused - did you specify the right host or port?

System configuration prerequisites

Before bringing up the whole system with kubeadm, please check that Docker is running on your machine for Kubernetes. Moreover, in order to avoid critical errors while executing kubeadm, we will show the necessary service configuration for both the system and kubelet. Apply the following configurations on the Kubernetes nodes as well as on the master so that kubelet works well with kubeadm.

CentOS system settings

There are additional settings in CentOS to make Kubernetes behave correctly. Be aware that, even if you are not using kubeadm to manage the Kubernetes cluster, the following setup should be considered while running kubelet:

  1. Disable SELinux, since kubelet does not support SELinux completely:
// check the state of SELinux; if it has already been disabled, skip the commands below
$ sestatus

We can disable SELinux through the following command, or by modifying the configuration file:

// disable SELinux through command
$ sudo setenforce 0
// or modify the configuration file
$ sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Then we'll need to reboot the machine:

// reboot is required
$ sudo reboot
  2. Enable the usage of iptables on bridged traffic. To prevent routing errors from happening, add the following runtime parameters:
// enable the parameters by setting them to 1
$ sudo bash -c 'echo "net.bridge.bridge-nf-call-ip6tables = 1" > /etc/sysctl.d/k8s.conf'
$ sudo bash -c 'echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.d/k8s.conf'
// reload the configuration
$ sudo sysctl --system
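To confirm that the parameters took effect, you can read them back with sysctl; on some systems the br_netfilter kernel module has to be loaded first (for example, with sudo modprobe br_netfilter):
// verify both bridge netfilter parameters report 1
$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables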

Booting up the service

Now we can start the service. First enable and then start kubelet on your Kubernetes master machine:

$ sudo systemctl enable kubelet && sudo systemctl start kubelet

While checking the status of kubelet, you may be worried to see the status displaying activating (auto-restart); and you may get further frustrated when you check the detailed logs with the journalctl command and see the following:

error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory

Don't worry. kubeadm takes care of creating the certificate authority file. It is referenced in the service configuration file, /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, by the argument KUBELET_AUTHZ_ARGS. The kubelet service won't be healthy without this file, so it keeps restarting itself until the file appears.
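To watch kubelet settle down once kubeadm has generated that file, you can follow its logs with a standard journalctl invocation:
// follow the kubelet service logs
$ sudo journalctl -u kubelet -f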

Go ahead and start all the master daemons via kubeadm. It is worth noting that using kubeadm requires root permission to achieve service-level privileges. As a sudoer, prefix each kubeadm command with sudo:

$ sudo kubeadm init

Did you find a preflight check error about swap while firing the kubeadm init command? Either disable swap, or use the following variant to ignore that check, as the error description suggests:

$ sudo kubeadm init --ignore-preflight-errors=Swap

And you will see the sentence Your Kubernetes master has initialized successfully! showing on the screen. Congratulations! You are almost done! Just follow the information about the user environment setup below the greeting message:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

The preceding commands ensure that every Kubernetes instruction fired by your account executes with the proper credentials and connects to the correct API server:

// Your kubectl command works great now
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
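If you are operating as root instead, an alternative that the kubeadm init output commonly suggests is to point KUBECONFIG at the admin configuration directly:
// alternative for the root user
$ export KUBECONFIG=/etc/kubernetes/admin.conf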

Even better, kubelet is now in a healthy state:

// check the status of kubelet
$ sudo systemctl status kubelet
...
Active: active (running) since Mon 2018-04-30 18:46:58 EDT; 2min 43s ago
...
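Since kubeadm runs the master daemons as pods, you can also list them in the kube-system namespace; note that the DNS add-on may stay in the Pending state until a network add-on is installed in the next section:

// list the system pods started by kubeadm
$ kubectl get pods -n kube-system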

Network configurations for containers

After the master of the cluster is ready to handle jobs and its services are running, we need to set up the network for container communication so that containers can access each other. This is even more important when building a Kubernetes cluster with kubeadm, since the master daemons all run as containers. kubeadm supports CNI (https://github.com/containernetworking/cni). We are going to attach the CNI via a Kubernetes network add-on.

There are many third-party CNI solutions that supply secure and reliable container network environments. Calico (https://www.projectcalico.org) is one CNI that provides stable container networking. Calico is lightweight and simple, yet implements the CNI standard well and integrates with Kubernetes:

$ kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml

Here, whatever your host OS is, the kubectl command can fire any subcommand for utilizing resources and managing the system. We use kubectl to apply the Calico configuration to our newborn Kubernetes cluster.
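To verify that the network add-on is coming up, you can watch for the Calico pods in the kube-system namespace (the exact pod names depend on the manifest version, so treat this as a rough check):

// check the Calico pods created by the manifest
$ kubectl get pods -n kube-system | grep calico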

More advanced management of networking and Kubernetes add-ons will be discussed in Chapter 7, Building Kubernetes on GCP.

Getting a node involved

Let's log in to your Kubernetes node and join it to the group controlled by kubeadm:

  1. First, enable and start the service, kubelet. Every Kubernetes machine should have kubelet running on it:
$ sudo systemctl enable kubelet && sudo systemctl start kubelet
  2. After that, fire the kubeadm join command with the token and the IP address of the master as inputs, notifying the master that the node is secured and authorized. You can get the token on the master node via the kubeadm command:
// on master node, list the token you have in the cluster
$ sudo kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
da3a90.9a119695a933a867 6h 2018-05-01T18:47:10-04:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
  3. In the preceding output, if kubeadm init succeeded, the default token will have been generated. Copy the token, paste it on the node, and then compose the following command:
// The master IP is 192.168.122.101, token is da3a90.9a119695a933a867, 6443 is the port of api server.
$ sudo kubeadm join --token da3a90.9a119695a933a867 192.168.122.101:6443 --discovery-token-unsafe-skip-ca-verification
What if you call kubeadm token list to list the tokens and see that they are all expired? You can create a new one manually with this command: kubeadm token create.
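Recent kubeadm releases can also print a ready-to-use join command together with a fresh token; check kubeadm token create --help on your version to confirm the flag is available:
// on master node, generate a new token and the matching join command
$ sudo kubeadm token create --print-join-command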
  4. Please make sure that the master's firewall doesn't block any traffic to port 6443, which is used for API server communication. Once you see the words Successfully established connection on the screen, it is time to check on the master whether the cluster got its new member:
// fire kubectl subcommand on master
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu01 Ready master 11h v1.10.2
ubuntu02 Ready <none> 26s v1.10.2

Well done! Whether your OS is Ubuntu or CentOS, kubeadm is now installed and kubelet is running. You can easily go through the preceding steps to build your Kubernetes cluster.

You may be wondering about the discovery-token-unsafe-skip-ca-verification flag used while joining the cluster. Remember the kubelet log saying that the certificate file could not be found? That's the reason: our Kubernetes node was brand new and clean, and had never connected with the master before, so there was no certificate file to verify against. Now that the node has shaken hands with the master, the file exists, so we may also join this way (for example, in situations that require rejoining the same cluster):

kubeadm join --token $TOKEN $MASTER_IPADDR:6443 --discovery-token-ca-cert-hash sha256:$HASH

The hash value can be obtained by the openssl command:

// rejoining the same cluster
$ HASH=$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //')
$ sudo kubeadm join --token da3a90.9a119695a933a867 192.168.122.101:6443 --discovery-token-ca-cert-hash sha256:$HASH

How it works...

When kubeadm init sets up the master, there are six stages:

  1. Generating certificate files and keys for services: Certificate files and keys are used for security management during cross-node communication. They are located in the /etc/kubernetes/pki directory. Take kubelet, for example: it cannot access the Kubernetes API server without passing identity verification.
  2. Writing kubeconfig files: The kubeconfig files define permissions, authentication, and configurations for kubectl actions. In this case, the Kubernetes controller manager and scheduler have related kubeconfig files to fulfill any API requests.

  3. Creating service daemon YAML files: The service daemons under kubeadm's control are just like computing components running on the master. As their deployment configurations are written to disk, kubelet will make sure each daemon is active.
  4. Waiting for kubelet to be alive, running the daemons as pods: When kubelet is alive, it will boot up the service pods described in the files under the /etc/kubernetes/manifests directory (see the listing example after this list). Moreover, kubelet guarantees to keep them activated, restarting a pod automatically if it crashes.
  5. Setting post-configuration of the cluster: Some cluster configurations still need to be set, such as configuring role-based access control (RBAC) rules, creating a namespace, and tagging the resources.
  6. Applying add-ons: DNS and proxy services can be added along with the kubeadm system.
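As a concrete example of stages 3 and 4, the static pod manifests written by kubeadm can be inspected directly on the master; the file names in the comment below are the typical ones for a kubeadm setup:

// list the static pod manifests created by kubeadm
$ ls /etc/kubernetes/manifests
// typically: etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml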

When the user runs kubeadm join on a Kubernetes node, kubeadm completes the first two stages, just as on the master.

If you have faced a heavy and complicated setup procedure in earlier versions of Kubernetes, it is quite a relief to set up a Kubernetes cluster with kubeadm. kubeadm reduces the overhead of configuring each daemon and starting them one by one. Users can still customize kubelet and the master services by just modifying familiar files: 10-kubeadm.conf and the YAML files under /etc/kubernetes/manifests. kubeadm not only helps to establish the cluster but also enhances security and availability, saving you time.

See also

We talked about how to build a Kubernetes cluster. If you're ready to run your first application on it, check the last recipe in this chapter and run the container! And for advanced management of your cluster, you can also look at Chapter 8, Advanced Cluster Administration, of this book:

  • Advanced settings in kubeconfig, in Chapter 8, Advanced Cluster Administration