Serverless Architectures with Kubernetes

By: Onur Yılmaz, Sathsara Sarathchandra

Overview of this book

Kubernetes has established itself as the standard platform for container management, orchestration, and deployment. By learning Kubernetes, you'll be able to design your own serverless architecture by implementing the function-as-a-service (FaaS) model. After an accelerated, hands-on overview of serverless architecture and various Kubernetes concepts, you'll cover a wide range of real-world development challenges and explore various techniques to overcome them. You'll learn how to create production-ready Kubernetes clusters and run serverless applications on them. You'll see how Kubernetes platforms and serverless frameworks such as Kubeless, Apache OpenWhisk, and OpenFaaS provide the tooling to help you develop serverless applications on Kubernetes. You'll also learn how to select the appropriate framework for your upcoming project. By the end of this book, you'll have the skills and confidence to design your own serverless applications using the power and flexibility of Kubernetes.
Table of Contents (11 chapters)
2. Introduction to Serverless in the Cloud

Google Kubernetes Engine

GKE provides a managed Kubernetes platform backed by the experience Google has gained from running containerized services for more than a decade. GKE clusters are production-ready and scalable, and they support upstream Kubernetes versions. In addition, GKE focuses on improving the development experience by eliminating the need to install, manage, and operate Kubernetes clusters yourself.
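As a quick illustration of this managed experience, a cluster can be created with a single CLI command. The following is a minimal sketch using the `gcloud` CLI; the cluster name and zone are placeholder values, and you would need an existing GCP project with billing enabled:

```shell
# Hedged sketch: create a two-node GKE cluster with n1-standard-1 machines.
# "my-serverless-cluster" and the zone are hypothetical example values.
gcloud container clusters create my-serverless-cluster \
    --num-nodes=2 \
    --machine-type=n1-standard-1 \
    --zone=us-central1-a

# Fetch credentials so kubectl can talk to the new cluster
gcloud container clusters get-credentials my-serverless-cluster \
    --zone=us-central1-a
```

Once the command completes, Google provisions and operates the control plane for you; only the worker nodes appear in your Compute Engine instance list.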

While improving the developer experience, GKE also tries to minimize the cost of running Kubernetes clusters. It charges only for the nodes in the cluster and provides the Kubernetes control plane free of charge. In other words, GKE delivers a reliable, scalable, and robust Kubernetes control plane at no cost. For the servers that run your application workloads, the usual GCP Compute Engine pricing applies. For instance, let's assume that you will start with two n1-standard-1 (vCPUs: 1, RAM: 3.75 GB) nodes:

The calculation would be as follows:

1,460 total...
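The calculation above can be sketched as follows. Note that the hourly rate used here is an assumption (the approximate us-central1 on-demand price for an n1-standard-1 machine at the time of writing); check the current GCP pricing page for your region before relying on the result.

```python
# Hedged sketch: estimating the monthly compute cost of a two-node GKE cluster.
# Only the worker nodes are billed; the GKE control plane is free of charge.

HOURS_PER_MONTH = 730      # GCP's standard billing month (365 * 24 / 12)
NODE_COUNT = 2             # two n1-standard-1 nodes (1 vCPU, 3.75 GB RAM each)
HOURLY_RATE_USD = 0.0475   # assumed n1-standard-1 on-demand rate, us-central1

node_hours = NODE_COUNT * HOURS_PER_MONTH   # 1,460 total node-hours per month
monthly_cost = node_hours * HOURLY_RATE_USD

print(f"Node-hours per month: {node_hours}")
print(f"Estimated monthly cost: ${monthly_cost:.2f}")
```

At the assumed rate, this works out to roughly $69 per month for the two nodes, with no additional charge for the control plane.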