Serverless Architectures with Kubernetes
Serverless and Kubernetes arrived on the cloud computing scene at about the same time, in 2014. AWS supports serverless through AWS Lambda, whereas Kubernetes became open source with the support of Google and its long and successful history in container management. Organizations started to create AWS Lambda functions for their short-lived temporary tasks, and many start-ups have been focused on products running on the serverless infrastructure. On the other hand, Kubernetes gained dramatic adoption in the industry and became the de facto container management system. It enables running both stateless applications, such as web frontends and data analysis tools, and stateful applications, such as databases, inside containers. The containerization of applications and microservices architectures have proven to be effective for both large enterprises and start-ups.
Therefore, running microservices and containerized applications is a crucial factor for successful, scalable, and reliable cloud-native applications. In addition, the following considerations strengthen the connection between Kubernetes and serverless architectures:
Cloud computing and deployment strategies are always evolving to create more developer-friendly environments at lower cost. Kubernetes and containerization have already won over the market and the hearts of developers, to the point that cloud computing without Kubernetes is unlikely to be seen for a very long time. Serverless architectures are gaining popularity by offering the same benefits; however, this does not pose a threat to Kubernetes. On the contrary, serverless applications will make containerization more accessible, and consequently, Kubernetes will profit. Therefore, it is essential to learn how to run serverless architectures on Kubernetes to create future-proof, cloud-native, scalable applications. In the following exercise, we will combine functions and containers and package our functions as containers.
In this exercise, we will package the HTTP function from Exercise 1 as a container so that it can be part of a Kubernetes workload. We will then run the container and trigger the function by sending HTTP requests to it.
The code files for the exercises in this chapter can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Exercise2.
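As a reminder, the HTTP function from Exercise 1 is split across main.go and function.go in the repository. A minimal, self-contained sketch of it looks like the following, assuming the handler is registered at the root path and the server listens on port 8080 (the container port used in the docker run commands below); the exact code is in the repository linked above:
package main

import (
    "fmt"
    "log"
    "net/http"
)

// WelcomeServerless is the HTTP function from Exercise 1 (kept in function.go
// in the repository); it responds with a static greeting.
func WelcomeServerless(w http.ResponseWriter, r *http.Request) {
    fmt.Fprint(w, "Hello Serverless World!")
}

// main (kept in main.go in the repository) registers the function handler and
// starts the HTTP server.
func main() {
    http.HandleFunc("/", WelcomeServerless)
    log.Fatal(http.ListenAndServe(":8080", nil))
}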
To successfully complete the exercise, we need to ensure the following steps are executed:
Create a Dockerfile in the same folder as the files from Exercise 1:
FROM golang:1.12.5-alpine3.9 AS builder
ADD . .
RUN go build *.go

FROM alpine:3.9
COPY --from=builder /go/function ./function
RUN chmod +x ./function
ENTRYPOINT ["./function"]
In this multi-stage Dockerfile, the function is built inside the golang:1.12.5-alpine3.9 container. Then, the binary is copied into the alpine:3.9 container as the final application package.
Build the Docker image with the following command in your Terminal: docker build . -t hello-serverless. Each line of the Dockerfile is executed sequentially and, with the last step, the Docker image is built and tagged: Successfully tagged hello-serverless:latest.

Start a container from the hello-serverless image with the following command in your Terminal: docker run -it --rm -p 8080:8080 hello-serverless. With that command, an instance of the Docker image is started, with port 8080 of the host system mapped to the container. Furthermore, the --rm flag removes the container when it exits. The log lines indicate that the container of the function is running as expected.

Open http://localhost:8080 in your browser.
It reveals that the WelcomeServerless function running in the container was successfully invoked via the HTTP request, and the response is retrieved.
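If you prefer the command line over the browser, the running container can also be called from a small Go program. The following is only a sketch for local verification, assuming the container from the previous step is still publishing port 8080 on localhost:
package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    // Call the containerized function over the published port.
    resp, err := http.Get("http://localhost:8080")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(body)) // expected: Hello Serverless World!
}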

In this exercise, we saw how we can package a simple function as a container. In addition, the container was started and the function was triggered with the help of Docker's networking capabilities. In the following exercise, we will implement a parameterized function to show how to pass values to functions and return different responses.
In this exercise, we will convert the WelcomeServerless function from Exercise 2 into a parameterized HTTP function. We will then run the container and trigger the function with different URL parameters.
The code files for the exercises in this chapter can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Exercise3.
To successfully complete the exercise, we need to ensure that the following steps are executed:
Update function.go from Exercise 2 to the following:
package main

import (
    "fmt"
    "net/http"
)

// WelcomeServerless greets the caller by name when a "name" URL parameter is provided.
func WelcomeServerless(w http.ResponseWriter, r *http.Request) {
    names, ok := r.URL.Query()["name"]
    if ok && len(names[0]) > 0 {
        fmt.Fprintf(w, "%s, Hello Serverless World!", names[0])
    } else {
        fmt.Fprint(w, "Hello Serverless World!")
    }
}
In the new version of the WelcomeServerless function, we now read the name URL parameter and return a personalized response when it is provided.
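The updated handler can also be exercised without rebuilding the container by using Go's httptest package. The following test file is a sketch and is not part of the book's code; if you try it, remove the file again before running docker build, since the Dockerfile builds with go build *.go:
// function_test.go: run with "go test" in the same folder as function.go.
package main

import (
    "net/http"
    "net/http/httptest"
    "testing"
)

func TestWelcomeServerless(t *testing.T) {
    // Invoke the handler once without and once with the name parameter.
    cases := map[string]string{
        "/":          "Hello Serverless World!",
        "/?name=Ece": "Ece, Hello Serverless World!",
    }
    for target, want := range cases {
        req := httptest.NewRequest(http.MethodGet, target, nil)
        rec := httptest.NewRecorder()
        WelcomeServerless(rec, req)
        if got := rec.Body.String(); got != want {
            t.Errorf("%s: got %q, want %q", target, got, want)
        }
    }
}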
Build the Docker image again with the following command in your Terminal: docker build . -t hello-serverless. Each line of the Dockerfile is executed sequentially and, with the last step, the Docker image is built and tagged: Successfully tagged hello-serverless:latest.

Start a container from the hello-serverless image with the following command in the Terminal: docker run -it --rm -p 8080:8080 hello-serverless. With that command, the function handlers start on port 8080 of the host system.

Open http://localhost:8080 in your browser.
It reveals the same response as in the previous exercise. If we provide URL parameters, we should get personalized Hello Serverless World messages.
Open http://localhost:8080?name=Ece in your browser and reload the page. We now expect to see a personalized Hello Serverless World message with the name provided in the URL parameters.
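The same request can also be sent from a small Go client, which takes care of URL-encoding the parameter value; this is a sketch, assuming the container is still running on localhost:8080:
package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "net/url"
)

func main() {
    // Build the query string; Encode handles values that contain spaces or
    // other special characters.
    params := url.Values{}
    params.Set("name", "Ece")

    resp, err := http.Get("http://localhost:8080?" + params.Encode())
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := ioutil.ReadAll(resp.Body)
    fmt.Println(string(body)) // expected: Ece, Hello Serverless World!
}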

In this exercise, we showed how a generic function can be used with different parameters: a single deployed function returned personalized messages based on the input values. In the following activity, a more complex function will be created and managed as a container to show how these techniques are applied in real life.
The aim of this activity is to create a real-life function for a Twitter bot backend. The Twitter bot will be used to search for available bike points in London and the number of available bikes at the corresponding locations. The bot will answer in natural language; therefore, your function will take a street name or landmark as input and output a complete, human-readable sentence.
Transportation data for London is publicly available and accessible via the Transport for London (TFL) Unified API (https://api.tfl.gov.uk). You are required to use the TFL API and run your functions inside containers.
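For a first look at the data this activity relies on, the following sketch queries the TFL BikePoint search endpoint and prints the number of available bikes at matching bike points. The endpoint path, query parameter, and the NbBikes property name are assumptions based on the public TFL Unified API documentation and should be verified against https://api.tfl.gov.uk; the activity solution in the repository is the authoritative implementation:
package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "net/url"
)

// bikePoint mirrors only the fields of the TFL BikePoint search response that
// we need; the field names are assumptions based on the public API docs.
type bikePoint struct {
    CommonName           string `json:"commonName"`
    AdditionalProperties []struct {
        Key   string `json:"key"`
        Value string `json:"value"`
    } `json:"additionalProperties"`
}

func main() {
    query := "Regent Street"
    resp, err := http.Get("https://api.tfl.gov.uk/BikePoint/Search?query=" + url.QueryEscape(query))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var points []bikePoint
    if err := json.NewDecoder(resp.Body).Decode(&points); err != nil {
        panic(err)
    }

    for _, p := range points {
        for _, prop := range p.AdditionalProperties {
            if prop.Key == "NbBikes" {
                fmt.Printf("%s has %s available bikes.\n", p.CommonName, prop.Value)
            }
        }
    }
}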
Once completed, you will have a container running for the function:
When you query via an HTTP REST API, it should return sentences similar to the following when bike points are found with available bikes:
When there are no bike points found or no bikes are available at those locations, the function will return a response similar to the following:
The function may also provide the following response:
Execute the following steps to complete this activity:
Create a main.go file to register the function handlers, as in Exercise 1.
Create a function.go file for the FindBikes function (a minimal sketch of this wiring follows below).
Create a Dockerfile for building and packaging the function, as in Exercise 2.
The files main.go, function.go, and Dockerfile can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Activity1.
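One possible shape of the main.go and FindBikes wiring is sketched below. The parameter name, response wording, and the lookupBikes helper are illustrative assumptions; the real implementation calls the TFL Unified API, as sketched earlier, and its exact sentences are given in the activity solution:
package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Register FindBikes at the root path and serve on the same port as the
    // previous exercises so the Dockerfile and docker run commands stay the same.
    http.HandleFunc("/", FindBikes)
    log.Fatal(http.ListenAndServe(":8080", nil))
}

// FindBikes reads the street name or landmark from the request and answers
// with a human-readable sentence about available bikes.
func FindBikes(w http.ResponseWriter, r *http.Request) {
    query := r.URL.Query().Get("query") // parameter name is an assumption
    if query == "" {
        fmt.Fprint(w, "Where are you looking for a bike?") // illustrative wording
        return
    }

    station, bikes, found := lookupBikes(query) // hypothetical helper
    if !found || bikes == 0 {
        fmt.Fprintf(w, "Sorry, no bikes are available around %s right now.", query)
        return
    }
    fmt.Fprintf(w, "There are %d available bikes at %s.", bikes, station)
}

// lookupBikes is a stub standing in for the TFL Unified API call.
func lookupBikes(query string) (string, int, bool) {
    return "", 0, false
}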
The solution for the activity can be found on page 372.
In this activity, we built the backend of a Twitter bot. We started by defining the main and FindBikes functions. Then we built and packaged this serverless backend as a Docker container. Finally, we tested it with various inputs to find the closest bike station. This real-life example illustrated the background operations of a serverless platform and how to write serverless functions.