Serverless Architectures with Kubernetes

By: Onur Yılmaz, Sathsara Sarathchandra
Overview of this book

Kubernetes has established itself as the standard platform for container management, orchestration, and deployment. By learning Kubernetes, you'll be able to design your own serverless architecture by implementing the function-as-a-service (FaaS) model. After an accelerated, hands-on overview of serverless architecture and various Kubernetes concepts, you'll cover a wide range of real-world development challenges faced by developers, and explore various techniques to overcome them. You'll learn how to create production-ready Kubernetes clusters and run serverless applications on them. You'll see how Kubernetes platforms and serverless frameworks such as Kubeless, Apache OpenWhisk, and OpenFaaS provide the tooling to help you develop serverless applications on Kubernetes. You'll also learn how to select the appropriate framework for your upcoming project. By the end of this book, you'll have the skills and confidence to design your own serverless applications using the power and flexibility of Kubernetes.
Table of Contents (11 chapters)
2. Introduction to Serverless in the Cloud

Application Migration in Kubernetes Clusters

Kubernetes distributes applications across servers and keeps them running reliably and robustly. The servers in a cluster can be VMs or bare-metal instances with different technical specifications. Let's assume you have connected only standard VMs to your Kubernetes cluster and they are running various types of applications. If one of your upcoming data analytics libraries requires GPUs to operate faster, you need to connect servers with GPUs. Similarly, if your database application requires SSD disks for faster I/O operations, you need to connect servers with SSD access. These kinds of application requirements result in having different node pools in your cluster. You also need to configure Kubernetes workloads to run on the appropriate nodes. Taints are used to mark certain nodes as reserved for special types of workloads, and pods running those workloads are marked with matching tolerations so that they can be scheduled onto the tainted nodes....
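As a minimal sketch of the taint-and-toleration pattern described above, the following shows a GPU node being reserved and a pod that is allowed to run on it. The node name (gpu-node-1), the taint key/value (gpu=true), and the hardware=gpu label are hypothetical examples, not values from the book:

```yaml
# Reserve a GPU node for GPU workloads (node name and key are illustrative):
#   kubectl taint nodes gpu-node-1 gpu=true:NoSchedule
#   kubectl label nodes gpu-node-1 hardware=gpu
apiVersion: v1
kind: Pod
metadata:
  name: analytics-gpu
spec:
  # The toleration matches the node's taint, so scheduling on the
  # reserved node is permitted for this pod.
  tolerations:
    - key: "gpu"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  # A toleration alone does not force placement; the nodeSelector
  # additionally pins the pod to nodes carrying the GPU label.
  nodeSelector:
    hardware: gpu
  containers:
    - name: analytics
      image: tensorflow/tensorflow:latest-gpu
```

Note that taints only repel pods that lack a toleration; combining the toleration with a nodeSelector (or node affinity) is what ensures the workload actually lands on the specialized node pool.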