Hands-On Serverless Computing

By Kuldeep Chowhan
About this book
Serverless applications and architectures are gaining momentum and are increasingly being used by companies of all sizes. Serverless software takes care of many problems that developers face when running systems and servers, such as fault tolerance, centralized logging, horizontal scalability, and deployments. You will learn how to harness serverless technology to rapidly reduce production time and minimize your costs, while still having the freedom to customize your code without hindering functionality. Upon finishing the book, you will have the knowledge and resources to build your own serverless application hosted in AWS, Microsoft Azure, or Google Cloud Platform, and will have experienced the benefits of event-driven technology for yourself. This hands-on guide covers the basics of serverless architectures and how to build them on three public cloud platforms, using Node.js as the programming language, Visual Studio Code for code editing, and Postman for testing, so that you can develop applications quickly and securely without the hassle of configuring and maintaining infrastructure.
Publication date:
July 2018
Publisher
Packt
Pages
350
ISBN
9781788836654

 

What is Serverless Computing?

Before we start creating serverless applications to run in Amazon Web Services (AWS), Microsoft Azure, and the Google Cloud Platform (GCP), let's learn what serverless computing is. We will learn how it is being used, what Function as a Service (FaaS) means, and the benefits that serverless computing provides. We will also look at what serverless computing isn't.

In this chapter, we will cover the following topics:

  • What is serverless computing?
  • Use cases of serverless computing
  • Unpacking FaaS
  • What serverless computing is not
  • The benefits of serverless computing
  • Best practices to get the maximum benefits out of serverless computing
 

What is serverless computing?

Serverless applications still require servers to run; hence, the term is a misnomer. What serverless computing really adds is a layer of abstraction on top of cloud infrastructure: developers no longer worry about provisioning and managing physical or virtual servers in the cloud. It gives you the ability to develop, deploy, and run applications without provisioning, managing, or scaling the infrastructure they execute on. The cloud provider automatically provisions the infrastructure required for your functions to run and scales your serverless application with high availability.

In most cases, when people think of serverless computing, they think of applications whose backends run on cloud providers, such as AWS, that are fully managed, event triggered, and ephemeral, lasting only for one invocation inside a stateless container. While serverless applications are fully managed and hosted by cloud providers, it is a misinterpretation to say that these applications run entirely without servers. Servers are still involved in their execution; it is just that they are managed by the cloud provider. One way to think about such applications is as FaaS. The most popular implementation of FaaS at the moment is AWS Lambda, but there are other implementations: Azure Functions from Microsoft Azure, Cloud Functions from GCP, and Apache's open source serverless platform, OpenWhisk.

Serverless computing is all about the modern developer's expanding frame of reference.

What we have seen is that the atomic unit of scale has been shifting from the virtual machine to the container; take this one step further, and we arrive at the function: a single-purpose block of code that is easy to reason about. It can:

  • Process an image
  • Transform a piece of data
  • Encode a piece of video

The following diagram depicts the difference between Monolithic, Microservice, and FaaS architectures:

Monolithic versus Microservice versus FaaS

Developing serverless applications means that you can focus on solving the core business problem instead of spending time on operating and managing runtimes or compute infrastructure, whether on-premises or in the cloud. With this reduced overhead, developers can reclaim the time and energy they would otherwise spend building solutions to provision, manage, and scale highly available and reliable applications. I will touch upon this more later in the book.

The real goal of the serverless computing movement is to provide a high level of abstraction over compute infrastructure so that developers can focus on solving critical business problems, deploy solutions rapidly, and reduce the time it takes to bring business ideas to market, compared to the time it takes with traditional infrastructure.

Serverless and event-driven collision

Event-driven computation is an architecture pattern that emphasizes acting in response to the reception of events. This pattern promotes loosely coupled services and ensures that a function executes only when it is triggered. It also encourages developers to think about the types of events a function needs to handle, and the responses to them, before programming the function:

Event-driven architecture example

A system built with event-driven architecture consists of Event Producers that produce a stream of events, an Event Ingestion layer that ingests them, and Event Consumers that listen for the events, as shown in the preceding diagram.

Events are delivered to consumers in near real time so that they can respond immediately to events as they happen. Event producers are decoupled from event consumers: the producers don't know which consumers are listening to the events they produce. Event consumers are also decoupled from one another, and every event consumer sees every event produced by the event producers.

Serverless applications are usually built by combining multiple functions (FaaS), using offerings such as AWS Lambda or Microsoft Azure Functions, with external backend resources, such as Amazon S3 and Amazon DynamoDB, to manage state between invocations. The architecture that ties these functions together is event-driven architecture. By combining tools such as AWS Lambda and Amazon S3, you can develop applications without having to think about provisioning and managing infrastructure. As this is a shift in architecture compared to how you might have operated so far, you also need to rethink how data flows through your application, which is now made up of functions.

In this event-driven architecture, the functions are event consumers: they are expected to come alive when an event occurs and are responsible for processing it. Some examples of events that trigger serverless functions include the following (a minimal handler sketch follows the list):

  • API requests
  • Scheduled events
  • Events in object storage
  • Events in databases
  • Notification events
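Here is that minimal sketch: a Node.js AWS Lambda function subscribed to S3 "object created" notifications. The record fields read below follow the standard S3 event shape:

// Minimal event consumer: logs each newly created S3 object.
exports.handler = (event, context, callback) => {
  // An S3 notification can carry one or more records.
  event.Records.forEach((record) => {
    const bucket = record.s3.bucket.name;
    const key = record.s3.object.key;
    console.log(`New object: s3://${bucket}/${key}`);
  });
  callback(null, 'processed ' + event.Records.length + ' record(s)');
};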
 

What is FaaS?

I've mentioned FaaS a few times already, so let's dig into what it really means. Serverless computing involves code that runs as a service on infrastructure fully managed by the cloud provider; it is automatically provisioned in response to events and automatically scaled to ensure high availability. You can think of this as functions that run in stateless, ephemeral containers created and maintained by the cloud provider. You might have already come across terms such as Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS). Let's look at what they mean. SaaS is a form of cloud computing in which software is licensed on a subscription basis, hosted centrally, and delivered remotely by the provider over the internet; examples are Google Apps, Citrix GoToMeeting, and Concur. IaaS is a form of cloud computing where compute infrastructure resources are provisioned and managed over the internet, scale up quickly, and are billed for what you use; examples are Azure Virtual Machines and AWS EC2. PaaS is a form of cloud computing where the software and infrastructure needed for application development are provided over the internet by the provider; examples are AWS Elastic Beanstalk and Azure App Services.

Let's also look at AWS Lambda to learn more about FaaS.

AWS Lambda lets you run code without provisioning or administering servers. You pay only for the compute time you consume; no charge applies when your code is not running. Using Lambda, you can run code for virtually any type of application or backend service, all with zero administration. You upload the code, and Lambda takes care of everything required to run and scale it with high availability. You can set up your code to be triggered automatically by other AWS services or call it directly from any web or mobile app.
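Calling a function directly from an application might look like the following sketch, which assumes the AWS SDK for JavaScript (v2) and a hypothetical deployed function named hello-world:

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda({ region: 'us-east-1' });

lambda.invoke({
  FunctionName: 'hello-world', // hypothetical function name
  Payload: JSON.stringify({ name: 'serverless' }),
}, (err, data) => {
  if (err) {
    console.error(err);
  } else {
    // Payload holds the function's response as a JSON string.
    console.log('Response:', data.Payload.toString());
  }
});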

Let's look at the features that AWS Lambda offers that fit into the FaaS paradigm:

  • Essentially, FaaS runs code without you having to provision and manage servers, and it executes in response to events rather than running all the time. With traditional architectures that involve containers or physical/virtual servers, you would need the servers to be running constantly. As the infrastructure is used only when needed, you effectively pay for full utilization, and the cost savings are large because you pay only for the compute time consumed while the function runs.
  • With FaaS solutions, you can run almost any type of application; you pick from the runtimes the provider supports rather than being tied to a particular framework. For example, AWS Lambda functions can be written in JavaScript (Node.js), Python, Go, C#, and any JVM language (Java, Scala, and so on).
  • The deployment architecture for your code also differs from traditional systems, as there is no server to update yourself. Instead, you upload your latest code to the cloud provider, and it makes sure the new version is used for subsequent executions.
  • AWS scales your function automatically based on the number of requests to process, without any further configuration from you. If your function needs to execute 10,000 times in parallel, AWS scales up the infrastructure required to run those 10,000 parallel executions. The containers that execute your code are stateless and ephemeral; AWS provisions and destroys them as the runtime needs dictate.
  • In AWS, functions are triggered by different event types, such as S3 (file) updates, scheduled tasks based on a timer, messages sent to a Kinesis stream, messages sent to SNS topics, and many more.
  • AWS also allows functions to be triggered as a response to HTTP requests through Amazon API Gateway.

State

Functions run in ephemeral containers, so they face significant restrictions when it comes to managing state. You need to design your functions so that a subsequent run cannot rely on state from a previous run. In short, develop your functions as if they are stateless.

This affects how you design your application's architecture. Since functions are stateless, you need external resources to manage application state so that it can be shared between runs. Some popular external resources widely used with the FaaS architecture are Amazon S3, which provides a simple web services interface to store and retrieve any amount of data, at any time, from anywhere on the web; caching solutions, such as Memcached or Redis; and database solutions, such as Amazon DynamoDB, a fast and flexible NoSQL database service for any scale.
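As a sketch of externalizing state, the following handler persists data to DynamoDB so that later invocations (or other functions) can read it; it assumes the AWS SDK for JavaScript (v2) and a hypothetical table named customer-state with a customerId partition key:

const AWS = require('aws-sdk');
const db = new AWS.DynamoDB.DocumentClient();

module.exports.recordVisit = (event, context, callback) => {
  // Keep state outside the function: the container may not survive
  // until the next invocation, but the DynamoDB item will.
  db.put({
    TableName: 'customer-state', // hypothetical table
    Item: { customerId: event.customerId, lastSeen: Date.now() },
  }, callback);
};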

Execution duration

Functions are limited in how long each invocation is allowed to run. Each cloud provider has different time limits for their FaaS offering, after which the function execution will be terminated. Let's look at the different timeouts provided by each cloud provider for functions:

  • AWS Lambda—5 minutes
  • Azure Functions—10 minutes
  • Google Cloud Functions—9 minutes

As function execution is limited by the provider's time limit, certain architectures with long-running processes are not suited to FaaS. If you still want to fit a long-running process into a FaaS architecture, you need to design it so that several functions are coordinated to accomplish the long-running task, as opposed to a traditional architecture where everything is handled within the same application.
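One simple coordination pattern is sketched below: the function processes one chunk per invocation and asynchronously re-invokes itself with a cursor until the job completes. It assumes the AWS SDK for JavaScript (v2) and an illustrative doChunkOfWork helper that processes one slice and returns the next cursor, or null when done:

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

// Hypothetical helper: process one slice, return the next cursor or null.
function doChunkOfWork(cursor) {
  return cursor < 1000 ? cursor + 100 : null;
}

module.exports.processBatch = (event, context, callback) => {
  const next = doChunkOfWork(event.cursor || 0);
  if (next === null) {
    callback(null, 'job complete');
    return;
  }
  // Hand the rest of the work to a fresh invocation before the timeout hits.
  lambda.invoke({
    FunctionName: context.functionName,
    InvocationType: 'Event', // asynchronous, fire-and-forget
    Payload: JSON.stringify({ cursor: next }),
  }, callback);
};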

Understanding cold start

What is cold start?

Cold start = time it takes to boot a computer system

What is cold start in FaaS?

When you execute a cold (inactive) function for the first time, a cold start occurs. The cold start time is the time the cloud provider takes to provision the required runtime container, download your function's code, and then run it. This can increase the execution time of the function considerably, as certain runtimes take longer to provision. The converse is when your function is hot (active): the container with your code stays alive, ready and waiting for execution. While your function is running, it is considered active; after a certain period of inactivity, the cloud provider drops the runtime container with your code in it to keep operating costs low, and at that point your function is considered cold again.

The time a cold start takes varies between runtimes. If your function uses a runtime such as Node.js or Python, the cold start time isn't significant; it may add less than 100 ms of overhead to your function's execution.

If your function uses a runtime such as the JVM, you will see cold start times of a few seconds or more while the JVM runtime container is spun up. The cold start latency has a significant impact in the following scenarios:

  • Your functions are invoked infrequently, say once every 15 minutes or so. In this case, cold starts add noticeable overhead to your function's execution.
  • Your functions see sudden spikes in execution. For example, a function typically executed once per second suddenly ramps up to 50 executions per second. Here, too, you will see noticeable overhead.

Understanding and knowing about this performance bottleneck is essential when you are architecting your FaaS application so that you can take this into account to understand how your functions operate.

Some analysis has been done to understand the container initialization times for AWS Lambda:

  • Containers are terminated after 15 minutes of inactivity
  • Lambda within a private VPC increases container initialization time

People overcome this by pinging their Lambda function once every 5 or 10 minutes to keep its runtime container alive and prevent it from going cold; a sketch of this keep-warm pattern follows.
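In this sketch, the warmup flag is a convention invented for the example (set by a scheduled ping event), not a platform feature:

module.exports.handler = (event, context, callback) => {
  if (event.warmup) {
    // Scheduled ping: return immediately, keeping the container alive.
    callback(null, 'staying warm');
    return;
  }
  // ...normal event handling goes here...
  callback(null, 'handled real event');
};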

Is cold start a real concern? Whether your function will have a problem like this is something you need to test under production-like load, so that you understand the overhead cold starts add to your FaaS application.

API gateway

One of the things I mentioned about FaaS earlier is the API gateway. An API gateway is a layer that sits in front of backend HTTP services or other resources, such as FaaS functions, and decides where to route each HTTP request based on the route configuration defined in the gateway. In the context of FaaS, the API gateway maps incoming HTTP request parameters to the inputs of the FaaS function, transforms the response it receives from the function into an HTTP response, and returns that HTTP response to the caller.

Each cloud provider has an offering in this space:

  • AWS has an offering called Amazon API Gateway
  • Microsoft Azure has an offering called Azure API Management
  • GCP has an offering called Cloud Endpoints

The working of the Amazon API gateway is as shown in the following figure:

How the Amazon API Gateway works

API gateways provide additional capabilities along with routing the requests, including:

  • Authentication
  • Throttling
  • Caching
  • Input validation
  • Response code mapping
  • Metrics and logging

The best use case for FaaS plus an API gateway is the creation of a feature-rich, HTTP-based microservice with scaling, monitoring, provisioning, and management all taken care of by the provider in a true serverless computing environment. A minimal sketch of such a function follows.
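With Amazon API Gateway's Lambda proxy integration, the function receives the HTTP request as its event and returns an object that the gateway maps back to an HTTP response:

module.exports.hello = (event, context, callback) => {
  const name = (event.queryStringParameters || {}).name || 'world';
  callback(null, {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  });
};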

 

The benefits of serverless computing

So far I have covered the definitions of serverless computing and FaaS. Let's now look at the benefits that serverless applications provide:

  • Reduced operational cost
  • Rapid development
  • Scaling costs
  • Easier operational management

Reduced operational cost

As covered earlier, serverless or FaaS applications run at the cloud provider in a fully managed environment. Because functions run on a small set of pre-defined runtimes shared by many customers within the cloud provider's environment, the provider can exploit economies of scale and reduce the operational cost of running your functions. The provider's cost gains also come from the fact that everyone shares the same infrastructure, including networking, and each customer uses it only for short periods rather than all the time, as in a traditional application.

These reduced costs of running functions are passed down to the people consuming FaaS offerings: you are charged very little per execution, billed in small time increments (100 ms in the case of AWS Lambda). You will see similar benefits with IaaS and PaaS solutions; however, FaaS takes them to the next level.

Rapid development

As FaaS applications run on infrastructure that is fully managed, automatically provisioned, and automatically scaled by the cloud provider based on load, you as a developer can focus on writing the code that solves your business problem, rather than spending time figuring out how to spin up servers, keep them highly available, and scale them with load.

AWS, Microsoft Azure, and GCP all provide various tools to develop and deploy your functions to their environments easily. We will cover these in later chapters, when we set up your development environment and when we are ready to deploy our serverless applications.

Scaling costs

With FaaS offerings, horizontal scaling is managed automatically by the cloud provider, which takes care of scaling the required infrastructure up and down. This means you don't have to spend any time developing scaling solutions yourself; everything is taken care of for you. The cloud provider also ensures that your functions run in a highly available environment across multiple availability zones (AZs) within a region. Another big benefit is that the provider patches the infrastructure automatically, so you don't have to worry about security vulnerabilities in the operating system or runtimes.

The big benefit with FaaS offerings is that, for the compute infrastructure, you pay only for what you need, down to 100 ms of execution time (AWS Lambda). Depending on your application's traffic patterns, this can result in huge cost savings:

Cost savings with serverless compared to traditional infrastructure

If you look at the preceding diagram, you will see that in a traditional infrastructure environment you always scale up in steps to ensure you can handle the load without affecting your application, whereas in a serverless/FaaS environment you scale with exactly what you need and don't have to provision step increments for peak load. This gives you tremendous cost savings.

Another example: say your application processes only one request every minute and takes 100 ms to process each request. On traditional infrastructure, this results in CPU usage of about 0.2 percent over the entire hour; that infrastructure is very inefficiently utilized, and you could run many other applications on it without impacting yours. With a FaaS offering, you pay only for the 100 to 200 ms of compute time per minute that your application actually uses, roughly 0.3 percent of the hour, which results in huge savings when running your application on a serverless architecture. The arithmetic is sketched below.
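A back-of-the-envelope check of that utilization figure, assuming the worst case of 200 ms per request from the example above:

const requestsPerHour = 60;                             // one per minute
const msPerRequest = 200;                               // worst case
const busyMsPerHour = requestsPerHour * msPerRequest;   // 12,000 ms
const utilization = busyMsPerHour / (60 * 60 * 1000);   // of 3,600,000 ms
console.log(`${(utilization * 100).toFixed(2)}% busy`); // prints "0.33% busy"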

Easier operational management

So far, we have talked about the benefits that serverless computing provides in areas such as scaling, rapid development, and reduced operational costs. Let's look at how FaaS offerings reduce the operational management of an application.

Easier packaging and deployment

Packaging and deploying your application in a FaaS environment is really simple compared to deploying to a container or an entire server. With FaaS, all you do is create your artifact (a JAR for JVM languages, or a ZIP archive for Node.js or Python) and upload it directly to the FaaS offering. You don't need any additional deployment or configuration management tools. Some cloud providers even let you write the application directly in the console itself.

As there is no need to have additional configuration or deployment tools, system administration is significantly reduced.

Time to market

As operational management in a FaaS environment is significantly reduced, turning a business idea into reality takes much less time than in a traditional infrastructure environment, leaving more time to try out further ideas, which is a win for your business. With less time spent on operations, more people can work on solving business problems, which helps you bring your ideas to market faster.

 

What serverless computing is not

So far, I have talked about serverless computing and the benefits it provides over traditional infrastructure architectures. Let's spend some time understanding what is not serverless before we look at the drawbacks of serverless computing.

Comparison with PaaS

Given that serverless or FaaS applications tend to resemble twelve-factor applications, people often compare them to PaaS offerings. However, if you look closely, PaaS offerings are not built to bring your entire application up and down for every request, whereas FaaS applications are built to do exactly that. For example, AWS Elastic Beanstalk, a popular PaaS offering from AWS, is not designed to bring up the entire infrastructure every time a request is processed, although it does provide many capabilities that make the deployment and management of web applications very easy.

The other key difference between PaaS and FaaS is scaling. With PaaS offerings, you still need a solution to scale your application with the load; with FaaS offerings, this is completely transparent. Many PaaS solutions, such as AWS Elastic Beanstalk, do offer autoscaling based on parameters such as CPU, memory, or network, but this is still not tailored to individual requests, which makes FaaS offerings much more efficient at scaling and a better bang for your buck in terms of cost.

Comparison with containers

Another comparison that is quite popular for FaaS offerings is with containers.

Let's look at what containers are, as we haven't discussed them so far. Containerization of infrastructure has been around for a long time, but it was made popular in 2013 by Docker, which combined operating-system-level virtualization with filesystem images on Linux. With this, it became easy to build and deploy containerized applications. The biggest benefit of Docker containers is that, unlike virtual machines (VMs), they share the operating system with the host, which tremendously reduces the size of the image required to run a containerized application. Because the footprint of Docker containers is smaller, you can run multiple containers on the same host without significant impact on the host system.

As container runtimes gained popularity, container platforms started to evolve. Some of the notable ones are:

  • CloudFoundry
  • Apache Mesos

The next step in the evolution of containerization was providing an API that let developers deploy Docker images across a fleet of compute infrastructure. The most popular container orchestration and scheduling platform out there right now is Kubernetes. It grew out of Google's experience with Borg, the company's internal cluster manager, and was open sourced by Google in 2014; it is now the most commonly used container platform. There are other solutions as well, such as Mesosphere.

A further advancement in container solutions was cloud-hosted container platforms, such as Amazon EC2 Container Service (ECS) and Google Container Engine (GKE), which, like serverless offerings, eliminate the need to provision and manage the master nodes in the cluster. However, you still have to provision and manage the nodes on which the containers are scheduled to run, and many solutions exist to help maintain the nodes within a cluster. AWS introduced a new container offering called Fargate, which is completely serverless: you don't have to provision the nodes within the cluster ahead of time. It is still early days for Fargate, but if it delivers what it promises, it will help a lot of developers by removing the need to provision and manage cluster nodes.

Fundamentally, the same argument I made for PaaS holds true for containers as well. In serverless or FaaS offerings, scaling is automatically managed, transparent, and fine-grained. Container platforms do not yet offer an equivalent. There are scaling solutions within container platforms, such as Kubernetes Horizontal Pod Autoscaling, which scale based on system load; however, at the moment they don't offer the same level of control that FaaS offerings provide.

As the scaling and management gap between FaaS offerings and hosted container solutions narrows, the choice between the two may come down to the type of application. For example, you might choose FaaS for event-driven applications and containers for request-driven components with many entry points. I expect FaaS and container offerings to converge in the coming years.

#NoOps

Serverless computing or FaaS doesn't mean NoOps entirely. Admittedly, there is no system administration required to provision and manage infrastructure to run your applications. However, some aspects of operations (Ops) are still involved in making sure your application runs the way it is supposed to. You need to monitor your application using the monitoring data that FaaS offerings provide and ensure that it is running optimally.

Ops doesn't just mean system administration. It also includes monitoring, deployment, security, and production debugging. With serverless applications, you still have to do all of these and, in some cases, Ops may be harder because the tool set is relatively new.

 

Limits to serverless computing

So far, I have talked about many things regarding serverless computing. Let's look at the drawbacks of serverless computing as well.

Serverless architecture has its limits, and it is important to understand the drawbacks, recognize when to use serverless computing, and know how to implement it, so that you can address these concerns ahead of time. Some of these limits include:

  • Infrastructure control
  • Long-running applications
  • Vendor lock-in
  • Cold start
  • Shared infrastructure
  • Server optimization is a thing of the past
  • Limited number of testing tools

We will look at the options for addressing some of these issues in the following sections.

Infrastructure control

As I mentioned previously, with a serverless architecture you will not have control over the underlying infrastructure; that is controlled by the cloud provider. However, developers can still choose which runtime to run their functions on, such as Node.js, Java, Python, C#, or Go, and they retain control over their functions' memory allocation and timeout duration.

Long-running applications

One of the strengths of serverless architectures is that they are built for fast, scalable, event-driven functions; therefore, long-running batch operations are not well suited to them. Most cloud providers have a timeout period of around five minutes, so any process that takes longer than the allocated time is terminated. The idea is to move away from batch processing and towards real-time, quick, responsive functionality.

Vendor lock-in

One of the biggest fears with serverless applications is vendor lock-in, a common fear with any move to cloud technology. For example, if you start committing to Lambda, then you are committing to AWS, and either you will not be able to move to another cloud provider or you will not be able to afford the transition.

While this is understandable, there are many ways to develop applications that make switching vendors easier. A popular and preferred strategy is to pull the cloud provider logic out of the handler files so that it can easily be swapped for another provider. First, let's look at a poor example.

The following code shows the handler file for a function that includes all of the database logic bound to the FaaS provider (AWS, in this case):

const database = require('database').connect();
const mail = require('mail');

module.exports.saveCustomer = (event, context, callback) => {
  const customer = {
    emailAddress: event.email,
    createdAt: Date.now(),
  };
  database.saveCustomer(customer, (err) => {
    if (err) {
      callback(err);
    } else {
      mail.sendEmail(event.email);
      callback();
    }
  });
};

The following code illustrates a better example of abstracting the cloud provider logic.

The following code shows a handler file that is abstracted away from the FaaS provider logic by creating a separate Customers class:

// customers.js: provider-agnostic business logic
class Customers {
  constructor(database, mail) {
    this.database = database;
    this.mail = mail;
  }

  save(emailAddress, callback) {
    const customer = {
      emailAddress: emailAddress,
      createdAt: Date.now(),
    };
    this.database.saveCustomer(customer, (err) => {
      if (err) {
        callback(err);
      } else {
        this.mail.sendEmail(emailAddress);
        callback();
      }
    });
  }
}

module.exports = Customers;

// handler.js: the thin, provider-specific entry point
const database = require('database').connect();
const mail = require('mail');
const Customers = require('customers');

const customers = new Customers(database, mail);

module.exports.saveCustomer = (event, context, callback) => {
  customers.save(event.email, callback);
};

The second approach is preferable, both for avoiding vendor lock-in and for testing. Removing the cloud provider logic from the event handler makes the application more flexible and portable across providers. It also makes testing easier: you can write unit tests against the class in the traditional way, as sketched below, and integration tests to verify that integrations with external services work properly.
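For example, a minimal unit test of the Customers class might inject stubbed dependencies, using only Node's built-in assert module (file paths here are illustrative):

const assert = require('assert');
const Customers = require('./customers'); // the class from the listing above

const savedItems = [];
const fakeDatabase = {
  saveCustomer: (customer, cb) => { savedItems.push(customer); cb(null); },
};
const sentEmails = [];
const fakeMail = {
  sendEmail: (address) => { sentEmails.push(address); },
};

const customers = new Customers(fakeDatabase, fakeMail);
customers.save('jane@example.com', (err) => {
  assert.ifError(err);
  assert.strictEqual(savedItems[0].emailAddress, 'jane@example.com');
  assert.strictEqual(sentEmails[0], 'jane@example.com');
  console.log('Customers.save unit test passed');
});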

Most of the serverless offerings by cloud providers are implemented in a similar way. However, if you had to switch vendors, you would definitely need to update the operational toolsets you use for monitoring, deployments, and so on, and you might have to change your code's interface to be compatible with the new cloud provider.

If you are using other solutions provided by the cloud vendor that are very much specific to the cloud provider, then moving between vendors becomes extremely difficult, as you would have to re-architect your application with the solutions that the new cloud provider provides.

Cold start

I have already discussed cold start earlier in the chapter, so let's recap the concern: after a period of inactivity, a function takes slightly longer to respond to an event. The time it takes varies between the runtimes your application uses.

This does tend to happen, but there are ways around cold start if you need an immediately responsive function. If you know your function will only be triggered periodically, one approach is to establish a scheduler that calls your function to wake it up every so often. Within AWS, you can use a CloudWatch Events scheduled rule to have your Lambda function invoked every 5 to 10 minutes so that AWS Lambda does not mark it as inactive or cold. Azure Functions and Google Cloud Functions have similar scheduling capabilities.

Shared infrastructure

A multi-tenant or shared infrastructure is one where applications belonging to different customers (or tenants) run on the same machine. It is a well-known strategy for achieving the economies of scale mentioned earlier. This can be a concern from a business perspective, since serverless applications run alongside one another regardless of business ownership. Although this doesn't affect your code, it does mean the same availability and scalability are provided to your competitors, and there can be situations where your code is affected by noisy neighbors (functions generating high load). A multi-tenant infrastructure also has security and robustness concerns, where one customer's function could take down another customer's function.

These problems are not unique to serverless offerings; they exist in many other service offerings that use multi-tenancy, such as Amazon EC2 and container platforms.

Server optimization is a thing of the past

As I mentioned earlier, you will not have access to any aspect of the underlying infrastructure on which the cloud provider executes your functions. With no access to the underlying infrastructure, you lose the ability to optimize the server to improve your application's performance for your clients. If you need to perform server optimizations so that your application runs optimally, then use an IaaS offering, such as AWS EC2 or Microsoft Azure Virtual Machines.

Security concerns

Serverless computing also has security issues, although the set of issues you have to deal with is significantly smaller than when running applications on traditional infrastructure. The security issues related to your application code itself remain the same in serverless computing. Because different serverless offerings use different security implementations, as you start to use more of them for your applications, you increase the attack surface available for malicious intent and, with it, the chances of a security attack.

Deployment of multiple functions

The tooling for deploying a single FaaS function is very robust at the moment. However, the tooling required to deploy multiple functions at the same time, or to coordinate the deployment of multiple functions, is lacking.

Consider a case where multiple functions make up a serverless application and you need to deploy all of them at once. There are not many tools that can do that for you, and the tooling to ensure zero downtime for serverless applications is not yet robust.

There are open source solutions, such as the Serverless Framework, that are helping to solve some of these issues, but some can only be solved with support from the cloud provider. AWS built the AWS Serverless Application Model (SAM) to address some of these concerns, which I will talk about in later chapters.

Limited number of testing tools

One of the limitations to the growth of serverless architectures is the limited number of testing and deployment tools. This is anticipated to change as the serverless field grows, and there are already some up-and-coming tools that have helped with deployment. I anticipate that cloud providers will start offering ways to test serverless applications locally as services; Azure has already made some moves in this direction, and AWS has been expanding here as well. There are packages on npm that let you test locally without deploying to your provider, including node-lambda and aws-lambda-local. One of my current favorite deployment tools is the Serverless Framework (https://serverless.com/framework/). It is compatible with AWS, Azure, Google, and IBM, and I like it because it makes configuring and deploying your function to your given provider incredibly easy, which also contributes to a more rapid development time.

Integration testing of serverless applications is hard. In a FaaS environment, you depend on external resources to maintain state, and your integration tests need to cover these scenarios as well. Typically, as there are not many solutions for running these external resources locally, people stub them for the purposes of integration testing. The challenge is keeping the stubs you create in sync with the actual implementation of the external resources, and some vendors may not even provide a stubbed implementation for their resources.

To ensure that your functions work, integration tests are usually run in production-like environments with all the necessary external resources in place. As our functions are very small compared to traditional applications, we need to rely more on integration testing to ensure they run correctly.

 

Summary

In this chapter, you learned about serverless applications and architecture, their benefits and use cases, and the limits of the serverless approach. It is important to understand serverless architecture and what it encompasses before designing an application that relies on it. Serverless computing is an event-driven, FaaS technology that uses third-party services and infrastructure to remove the burden of building and maintaining servers when creating an application.

Serverless computing may not be the right approach for every problem, so be cautious if anyone says it will replace all of your existing application architectures. It might be the answer for the new architectures you are building now, and it may well replace some of your existing ones, but it has drawbacks too, so keep them in mind when designing your serverless application architecture. As the tooling around serverless computing improves in the coming years, these drawbacks should become a thing of the past.

The benefits that serverless computing provides are significant: reduced operational costs, rapid development and deployment, easier operational management, and reduced environmental impact through better utilization of compute infrastructure.

The next chapter will discuss the tools and the programming language required to write your serverless applications. It will introduce the different SDKs that AWS, Microsoft Azure, and Google Cloud offer for writing serverless applications.

About the Author
  • Kuldeep Chowhan

    Kuldeep Chowhan is a Principal Software Developer at Expedia Group. He has been involved in building tools and platforms at Expedia for the last 5+ years. He has extensive experience using serverless technologies on AWS (such as AWS Lambda, Amazon API Gateway, and DynamoDB) with Node.js. He has built a Platform as a Service (PaaS) tool for the automated creation of source code, a CI/CD pipeline, and a fully automated pipeline for deploying Docker containers/AWS Lambda. He is also passionate about CI/CD and DevOps.
