Introduction to Serverless in the Cloud

Serverless Architecture and Function as a Service (FaaS)

Serverless is a cloud computing design where cloud providers handle the provisioning of servers. In the previous section, we discussed how operational concerns are layered and handed over. In this section, we will focus on serverless architectures and application design using serverless architecture.

In traditional software architecture, all of the components of an application are installed on servers. For instance, let's assume that you are developing an e-commerce website in Java and your product information is stored in MySQL. In this case, the frontend, backend, and database are installed on the same server. End users are expected to reach the shopping website with the IP address of the server, and thus an application server such as Apache Tomcat should be running on the server. In addition, user information and security components are also included in the package, which is installed on the server. A monolithic e-commerce application is shown in Figure 1.6, with all four parts, namely the frontend, backend, security, and database:

Figure 1.6: Traditional software architecture

Microservices architecture focuses on creating a loosely coupled and independently deployable collection of services. For the same e-commerce system, you would still have frontend, backend, database, and security components, but they would be isolated units. Furthermore, these components would be packaged as containers and managed by a container orchestrator such as Kubernetes. This enables components to be installed and scaled independently, since they are distributed over multiple servers. In Figure 1.7, the same four components are installed on the servers and communicate with each other via Kubernetes networking:

Figure 1.7: Microservices software architecture

Microservices are deployed to servers, which are still managed by the operations teams. With a serverless architecture, the components are converted into third-party services or functions. For instance, the security of the e-commerce website could be handled by an Authentication-as-a-Service offering such as Auth0. AWS Relational Database Service (RDS) can be used as the database of the system. The best option for the backend logic is to convert it into functions and deploy them to a serverless platform such as AWS Lambda or Google Cloud Functions. Finally, the frontend could be served by storage services such as AWS Simple Storage Service (S3) or Google Cloud Storage.
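The following is a minimal sketch of what one piece of that backend logic could look like as a Go function deployed to AWS Lambda, using the github.com/aws/aws-lambda-go package. The listProducts name and the hard-coded response body are hypothetical; a real function would query the managed database, such as AWS RDS:

    package main

    // Hypothetical sketch: a single piece of e-commerce backend logic packaged
    // as an AWS Lambda function. There is no server to provision or keep
    // running; the platform invokes the function on demand.
    import (
        "context"

        "github.com/aws/aws-lambda-go/events"
        "github.com/aws/aws-lambda-go/lambda"
    )

    // listProducts handles a single API Gateway request and returns a response.
    // A real implementation would read the product list from a managed database.
    func listProducts(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
        return events.APIGatewayProxyResponse{
            StatusCode: 200,
            Body:       `[{"id": 1, "name": "example product"}]`,
        }, nil
    }

    func main() {
        // lambda.Start hands control to the AWS Lambda runtime, which calls
        // listProducts for each incoming event.
        lambda.Start(listProducts)
    }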

With a serverless design, you only need to define these services to have scalable, robust, and managed applications running in harmony, as shown in Figure 1.8:

Note

Auth0 is a platform for providing authentication and authorization for web, mobile, and legacy applications. In short, it provides authentication and authorization as a service, where you can connect any application written in any language. Further details can be found on its official website: https://auth0.com.

Figure 1.8: Serverless software architecture

Starting from a monolithic architecture and dissolving it first into microservices and then into serverless components is beneficial for multiple reasons:

  • Cost: Serverless architecture helps to decrease costs in two critical ways. The first is that the management of the servers is outsourced, and the second is that it only costs money when the serverless applications are in use.
  • Scalability: If an application is expected to grow, the current best choice is to design it as a serverless application since that removes the scalability constraints related to the infrastructure.
  • Flexibility: When the scope of deployable units is decreased, serverless provides more flexibility to innovate, to choose the most suitable programming languages, and to operate with smaller teams.

These dimensions, and how they vary between software architectures, are visualized in Figure 1.9:

Figure 1.9: Benefits of the transition to serverless

When you start with a traditional software development architecture, the transition to microservices increases scalability and flexibility. However, it does not directly decrease the cost of running the applications since you are still dealing with the servers. Further transition to serverless improves both scalability and flexibility while decreasing the cost. Therefore, it is essential to learn about and implement serverless architectures for future-proof applications. In the following section, the implementation of serverless architecture, namely Function as a Service (FaaS), will be presented.

Function as a Service (FaaS)

FaaS is the most popular and widely adopted implementation of serverless architecture. All major cloud providers have FaaS products, such as AWS Lambda, Google Cloud Functions, and Azure Functions. As its name implies, the unit of deployment and management in FaaS is the function. Functions in this context are no different from any other function in any other programming language. They are expected to take some arguments and return values to implement business needs. FaaS platforms handle the management of servers and make it possible to run event-driven, scalable functions. The essential properties of a FaaS offering are these:

  • Stateless: Functions are designed as stateless, ephemeral operations: no files are saved to disk and no caches are managed. Every invocation starts quickly in a fresh environment, which is discarded when the function completes.
  • Event-triggered: Functions are designed to be triggered by events such as cron time expressions, HTTP requests, message queues, and database operations. For instance, it is possible to call the startConversation function via an HTTP request when a new chat is started. Likewise, it is possible to launch the syncUsers function when a new user is added to a database. A sketch of such an event-triggered function follows this list.
  • Scalable: Functions are designed to run in parallel, with as many instances as needed, so that every incoming request is answered and every event is handled.
  • Managed: Functions are governed by their platform so that the servers and underlying infrastructure are not a concern for FaaS users.
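
The following is a minimal, platform-agnostic sketch in Go of the event-triggered syncUsers function mentioned above. The UserCreatedEvent type and the syncUsers name are hypothetical; a real platform such as AWS Lambda, Kubeless, or OpenFaaS defines its own event types and registration mechanism, so the main function below only simulates a single invocation for local experimentation:

    package main

    import "fmt"

    // UserCreatedEvent carries the data the platform would pass in for each
    // invocation, for example when a new user row is inserted into a database.
    type UserCreatedEvent struct {
        UserID string
        Email  string
    }

    // syncUsers is stateless and ephemeral: it receives an event, does its work,
    // and returns. It keeps no files or caches between invocations, and the
    // platform may run many copies in parallel.
    func syncUsers(event UserCreatedEvent) (string, error) {
        // Business logic only; no servers, ports, or process lifecycle to manage.
        return fmt.Sprintf("synchronized user %s (%s)", event.UserID, event.Email), nil
    }

    func main() {
        // On a real FaaS platform this main function would not exist; the
        // platform invokes syncUsers whenever a matching event arrives.
        result, err := syncUsers(UserCreatedEvent{UserID: "42", Email: "user@example.com"})
        if err != nil {
            panic(err)
        }
        fmt.Println(result)
    }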

These properties of functions are covered by cloud providers' offerings such as AWS Lambda, Google Cloud Functions, and Azure Functions, as well as by on-premises offerings such as Kubeless, Apache OpenWhisk, and OpenFaaS. Owing to its popularity, the term FaaS is often used interchangeably with the term serverless. In the following exercise, we will create a function to handle HTTP requests and illustrate how a serverless function should be developed.

Exercise 1: Creating an HTTP Function

In this exercise, we will create an HTTP function to be a part of a serverless platform and then invoke it via an HTTP request. To execute the steps of the exercise, you will use Docker, a text editor, and a terminal.

Note

The code files for the exercises in this chapter can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Exercise1.

To successfully complete the exercise, we need to ensure the following steps are executed:

  1. Create a file named function.go with the following content in your favorite text editor:
    package main
    import (
        "fmt"
        "net/http"
    )
    func WelcomeServerless(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Hello Serverless World!")
    }

    In this file, we have created an actual function handler to respond when this function is invoked.

  2. Create a file named main.go with the following content:
    package main
    import (
        "fmt"
        "net/http"
    )
    func main() {
        fmt.Println("Starting the serverless environment..")
        http.HandleFunc("/", WelcomeServerless)
        fmt.Println("Function handlers are registered.")
        http.ListenAndServe(":8080", nil)
    }

    In this file, we have created the environment to serve this function. In general, this part is expected to be handled by the serverless platform.

  3. Start a Go development environment with the following command in your terminal:
    docker run -it --rm -p 8080:8080 -v "$(pwd)":/go/src --workdir=/go/src golang:1.12.5

    With that command, a shell prompt will start inside a Docker container for Go version 1.12.5. In addition, port 8080 of the host system is mapped to the container, and the current working directory is mapped to /go/src. You will be able to run commands inside the started Docker container:

    Figure 1.10: The Go development environment inside the container
  4. Start the function handlers with the following command in the shell prompt opened in step 3:
    go run *.go

    When the application starts, you will see the following lines:

    Figure 1.11: The start of the function server

    These lines indicate that the main function inside the main.go file is running as expected.

  5. Open http://localhost:8080 in your browser:
    Figure 1.12: The WelcomeServerless output

    The message displayed on the web page reveals that the WelcomeServerless function is successfully invoked via the HTTP request and the response is retrieved.
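
    Alternatively, running curl http://localhost:8080 from a second terminal should print the same Hello Serverless World! message, since the handler is registered for all paths on port 8080.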

  6. Press Ctrl + C to exit the function handler, and then type exit to stop the container:
    Figure 1.13: Exiting the function handler and container

In this exercise, we demonstrated how to create a simple function. We also presented the serverless environment to show how functions are served and invoked. In the following section, Kubernetes and serverless environments are introduced together to connect these two cloud computing phenomena.