Docker for Serverless Applications

By: Chanwit Kaewkasi

Overview of this book

Serverless applications have gained a lot of popularity among developers and are currently among the biggest buzzwords in the tech market. Docker and serverless are two terms that go hand in hand. This book starts by explaining serverless and Function-as-a-Service (FaaS) concepts and why they matter. It then introduces containerization and shows how Docker fits into the serverless ideology. It explores the architectures and components of three major Docker-based FaaS platforms, how to deploy them, and how to use their CLIs. The book then discusses how to set up and operate a production-grade Docker cluster. All FaaS framework concepts are covered with practical use cases, followed by deploying and orchestrating these serverless systems using Docker. The final chapter explores advanced topics and prototypes for FaaS architectures. By the end of this book, you will be able to build and deploy your own FaaS platform using Docker.

What is serverless?


Try to imagine a world fully driven by intelligent software.

In that world, we could create software without doing much at all: just describe what we would like to run, and minutes later it would be running somewhere on the internet, serving many users, and we would pay only for the requests our users actually make. That world does not exist yet.

Now, let's be more realistic and think of a world in which we still develop software ourselves, but no longer need to take care of server provisioning and management. For developers, this is close to ideal: we can deploy applications that reach millions of users without managing a single server, or even knowing where those servers are. All we really want is to create an application that addresses the needs of the business at scale, at an affordable price. Serverless platforms were created to address exactly these problems.

For developers and fast-growing businesses alike, serverless platforms look like a huge win. But what exactly are they?

The relationship between serverless and FaaS

The following diagram illustrates how event-driven programming, FaaS, and serverless relate to one another, with serverless FaaS as the intersection of FaaS and serverless:

Figure 1.2: A Venn diagram illustrating the relationship between serverless and FaaS

Serverless is a paradigm shift that frees developers from worrying about server provisioning and operations. Billing is typically pay-per-request. In addition, the public cloud offers many useful managed services that we can pick, connect together, and use to solve business problems and get the job done.

Applications in a serverless architecture typically rely on third-party services for jobs such as authentication, databases, or file storage. Serverless applications are not required to use these services, but architecting them this way takes full advantage of cloud-based serverless platforms. The frontends in this kind of architecture are usually thick, powerful clients, such as single-page applications or mobile applications.

The execution engine behind this serverless shift is the Function as a Service (FaaS) platform. A FaaS platform is a compute engine that allows us to write a simple, self-contained, single-purpose function to process or compute a task. The unit of compute on a FaaS platform is a function, which should ideally be stateless. This stateless property is what lets the platform manage and scale functions freely.
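
As a concrete illustration, here is a minimal sketch of such a function in Python. The handler name and request shape are assumptions for illustration only, not the API of any particular FaaS platform; most platforms wrap a function like this in their own template.

```python
import json

def handle(req):
    """A self-contained, single-purpose, stateless function.

    Everything it needs arrives in the request; nothing is kept
    between invocations, so the platform is free to start, stop,
    and scale instances of it at will.
    """
    payload = json.loads(req) if req else {}
    name = payload.get("name", "world")
    return json.dumps({"message": f"Hello, {name}!"})

# Local smoke test; on a FaaS platform the runtime would call handle() for us.
if __name__ == "__main__":
    print(handle('{"name": "serverless"}'))
```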

A FaaS platform does not necessarily run in a serverless environment the way AWS Lambda does; there are many FaaS implementations, such as OpenFaaS, the Fn Project, and OpenWhisk, that allow us to deploy and run functions on our own hardware. When a FaaS platform does run in a serverless environment, it is called serverless FaaS. For example, OpenWhisk running locally is simply our FaaS platform, but when it runs on IBM Cloud as IBM Cloud Functions, it is a serverless FaaS.

Every FaaS platform is designed around the event-driven programming model, so that it can connect efficiently to other services on the public cloud. The combination of asynchronous events and stateless functions makes serverless FaaS an ideal model for next-generation computing.
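
To give a feel for the event-driven side, here is a toy sketch of a dispatcher feeding queued events to a stateless handler. The event fields and handler are illustrative assumptions; a real platform such as OpenFaaS, OpenWhisk, or the Fn Project manages this invocation loop for us.

```python
# Illustrative only: a toy dispatcher that feeds queued events to a stateless
# handler, mimicking how a FaaS platform invokes functions asynchronously.
import queue

def handle(event):
    # Stateless: everything the function needs arrives inside the event.
    return f"processing {event['object']} triggered by {event['type']}"

events = queue.Queue()
events.put({"type": "image.uploaded", "object": "avatars/42.png"})
events.put({"type": "image.uploaded", "object": "avatars/43.png"})

while not events.empty():
    print(handle(events.get()))
```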

The disadvantages of serverless FaaS

But what are the drawbacks of this approach? They are as follows:

  • We basically do not own the servers. The serverless model is not suitable when we need fine-grained control over our infrastructure.
  • Serverless FaaS comes with a number of limitations, notably time limits on function execution and memory limits for each function instance. It also imposes a fixed, specific way of developing applications, which can make it hard to migrate existing systems directly to FaaS.
  • It is impossible to take full advantage of serverless platforms on private or hybrid infrastructure if we are not allowed to move every workload out of the organization, because one of the real benefits of serverless architectures is the wealth of convenient public services on the cloud.

Docker to the rescue

This book explores the balance between running FaaS on our own infrastructure and using serverless FaaS. To simplify and unify the FaaS deployment model, it focuses on three major FaaS platforms that allow us to deploy Docker containers as functions.

With Docker containers as the deployment units (functions), Docker as the development tool, and Docker as the orchestration engine and networking layer, we can develop serverless applications and deploy them on whatever hardware we have available, on our own private cloud infrastructure, or on a hybrid cloud that mixes our own hardware with the public cloud's.
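
To make the idea of a container as a function concrete, here is a minimal, illustrative Dockerfile that packages the Python handler sketched earlier into an image. The file names and base image are assumptions; each platform covered later in the book has its own, slightly different packaging conventions.

```dockerfile
# Illustrative only: package a single-purpose function as a Docker image.
FROM python:3.11-slim

WORKDIR /app

# handler.py is assumed to contain the handle() function sketched earlier.
COPY handler.py .

# Run the local entry point; a real FaaS platform would instead inject its
# own watchdog or runtime process to call handle() on each request.
CMD ["python", "handler.py"]
```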

One of the most important points is that a small team of developers with Docker skills can comfortably take care of this kind of infrastructure.

Look back at Figure 1.2. If you have picked up the clues while reading this chapter, try to guess where the contents of this book belong in that diagram. The answer is at the end of this chapter.