Learn AWS Serverless Computing

By: Scott Patterson

Overview of this book

Serverless computing is a way to run your code without having to provision or manage servers. Amazon Web Services provides serverless services that you can use to build and deploy cloud-native applications. Starting with the basics of AWS Lambda, this book takes you through combining Lambda with other services from AWS, such as Amazon API Gateway, Amazon DynamoDB, and AWS Step Functions. You'll learn how to write, run, and test Lambda functions using examples in Node.js, Java, Python, and C# before you move on to developing and deploying serverless APIs efficiently using the Serverless Framework. In the concluding chapters, you'll discover tips and best practices for leveraging the Serverless Framework to increase your development productivity. By the end of this book, you'll have become well-versed in building, securing, and running serverless applications using Amazon API Gateway and AWS Lambda without having to manage any servers.
Table of Contents (20 chapters)

  • Section 1: Why We're Here
  • Section 2: Getting Started with AWS Lambda Functions
  • Section 3: Development Patterns
  • Section 4: Architectures and Use Cases

Understanding software architectures

The evolution of compute has also opened up new possibilities for how we structure and architect our applications. Mark Richards talks about the "ingredients that go into an evolutionary cauldron: agility, velocity, and modularity" (source: keynote talk, Fundamentals of Software Architecture, O'Reilly Software Architecture Conference, London, 2016). By this, he means that, when we talk about software architecture, we need the ability to change and evolve the way we do things in order to keep up with the ever-increasing expectations we have of new technologies. Instead of investing in complex analysis and planning, organizations should invest in making sure their software solutions include these essential evolutionary ingredients so they can change and evolve at an acceptable pace when they need to.

With the progress being made in serverless compute technologies today, we can unlock new levels of these key ingredients. In the upcoming sections, we are going to explore the evolutionary steps in software architecture at a high level, as follows:

  • Monolith—single code base
  • N-tier—achieve individual scale
  • Microservices—do one thing and do it well
  • Nanoservices with serverless

Monolith – single code base

In the early days, we used to create software that was single-tiered. The frontend and backend code were combined, and often the database ran on the same machine. At the time, this was the best method for delivering new services to users. When user demand was high, it was necessary to scale the entire application, not just the parts that were being used the most. If this sounds similar to the evolution of compute, you're right on the mark. The ability to distribute and independently scale our compute resources has directly influenced how software is architected, developed, and tested.

Let's dig into some of the challenges of building a purely monolithic application:

  • Picture a team of 100 developers who all have the code checked out to their local dev environments. The team has multiple managers, ranging from change and release managers to human wellness and business owners.
  • There may also be another team responsible for testing the application once a major release becomes ready.
  • There's another team for operating the application once the features go live in production. All of these groups of people must coordinate and agree, often through a ticket management system, on a whole host of issues and decisions.
  • The time that's spent by developers organizing environments, lining up testers, and dealing with management noise greatly encroaches on the time that's available for writing actual business logic in code.
  • Adding to that, once the code is in production, it's very hard to make changes or add features.
  • The application has to be fully rebuilt and passed through multiple testing environment stages that may or may not be fully integrated, and the changes then need to be scheduled for release in the next version.

If a monolith is kept relatively small, scaling it is not a problem. However, as complexity increases, you're in for a whole set of challenges. This usually leads to an n-tier model, where we can scale individual components of the application.

N-tier – achieve individual scale

With the introduction of virtual machines also came software-defined networking. This gave us more flexibility in how we can build, configure, and scale networks. With the network also being virtual, we weren't bound by the requirement to have certain types of workloads on specific servers. Our public and private resources could now reside on the same physical server because of the hardware abstraction that was introduced by the hypervisor.

This means we can build more secure applications by separating the tiers of workloads. Our load balancing or web layer can be separated from the backend application server logic and data stores by a network barrier. We can hide the databases behind a network boundary that further protects from intrusion. Running the tiers on different virtual machines also means the code and configuration can be deployed independently (provided backward-compatibility is maintained).

Teams can now be responsible for their siloed areas of responsibility:

  • DBAs can maintain the databases.
  • Developers can deploy to the web or presentation tier.
  • Engineers can tune the balancing tier.

This sounds ideal in theory, but we still have the challenge of people working in silos. A developer has to ask a network engineer to open ports, or request that a certain database feature be enabled. The idea of ownership is still very much entrenched in these areas of responsibility, which can give rise to a culture of blame. Have you ever heard someone say "works for me" or "not my issue"?

In terms of hardware utilization, the biggest impact here is that we can now scale by workload type. If user demand creates a high load on the processing tier, we can add more compute nodes (virtual machines) to that tier, independently of the web tier. This is called scaling horizontally.

Amazon EC2 Auto Scaling groups are a great way to do this automatically in response to a certain metric threshold, such as the number of user sessions or CPU utilization. For on-demand EC2 instances, AWS charges for the amount of time that an instance is running.

The added benefit of autoscaling groups is that we can be elastic with our compute—we can scale it down when servers are underutilized, saving costs.
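As a rough sketch of how this can be configured programmatically with boto3 (assuming an Auto Scaling group already exists; the group name, policy name, and target value here are placeholders for illustration), a target-tracking policy keeps a chosen metric near a target value:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Keep average CPU utilization across the group near 60%.
    # AWS adds or removes instances automatically to hold this target.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-tier-asg",  # placeholder group name
        PolicyName="keep-cpu-near-60",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 60.0,
        },
    )

With a policy like this in place, the group scales out under load and scales back in when servers are underutilized, without manual intervention.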

Of course, once our application grows significantly, we may have multiple developers working on different parts of the application. Each feature may evolve differently and may deal with data in different ways. Microservices can help break an application into domains where each can use their own implementation strategy and database.

Microservices – do one thing and do it well

Around the time Docker started becoming popular, software architecture was also evolving. The Service-Oriented Architecture (SOA) pattern was a common way of layering applications into services separated by a bus or other messaging system, allowing the services to communicate with each other. The microservices approach follows the same principle of a loosely coupled application and takes it a step further in terms of granularity.

Microservices focus more on decoupling an application, and the communication mechanism is lightweight and typically HTTP. The mantra of a microservice is that it should do one thing and do it well. Large complex applications can be broken down into components with a bounded context, meaning that developers think about building within a domain instead of a generic service. Think about the example of a new customer signup service using REST rather than an abstract data service:

Microservice with one function performing all the operations
An added benefit is that they can also be deployed independently from other microservices.
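As a minimal sketch of the pattern in the preceding figure (the routes and in-memory store are hypothetical, purely for illustration), a single Lambda-style handler can perform all of the operations for a customer service:

    import json

    # In-memory stand-in for a real data store, for illustration only.
    USERS = {}

    def handler(event, context):
        """One function serving every operation of the customer service."""
        method = event.get("httpMethod")
        path = event.get("path", "")

        if method == "POST" and path == "/users":
            user = json.loads(event["body"])
            USERS[user["id"]] = user
            return {"statusCode": 201, "body": json.dumps(user)}

        if method == "GET" and path.startswith("/users/"):
            user = USERS.get(path.rsplit("/", 1)[-1])
            if user is None:
                return {"statusCode": 404, "body": "not found"}
            return {"statusCode": 200, "body": json.dumps(user)}

        return {"statusCode": 400, "body": "unsupported operation"}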

Deploying independently means that development teams have the freedom to choose the runtime, define their own software life cycles, and choose which platform to run on. Naturally, containerization has a lot of parallels here. Containers make it easier to break the monolith into smaller, modular applications. Since we're now scaling at the microservice level, we can add more compute capacity by starting up more containers.

The DevOps revolution helped with the adoption of microservice application architectures as well. The benefits behind both industry trends included agility, deployability, scalability, and availability among others. Microservices allowed us to truly own what we built because we were already responsible for configuring and deploying all the tiers. When we apply microservices principles to the function level, we get nanoservices. Let's explore these.

Nanoservices with serverless

These days, with the use of serverless technology, we aim to deploy smaller pieces of code much faster. We're still developing within a domain, but we no longer have to worry about the details of our runtime. We're not focusing on building infrastructure, platform services, or application servers – we can use the runtime to write code that maps directly to business value.

A key reason for adopting a serverless development model is that it lowers the time to value: the time it takes to define a problem, build the business logic, and deliver the value to a user is dramatically reduced. A factor that contributes to this is that developer productivity is not constrained by other dependencies, such as provisioning infrastructure. Furthermore, each developer can produce a higher-value output because the code they are writing actually does useful things. Developers can ship releases sooner, a few days from dev to production, meaning the overall development costs are lower as well.

Microservices can become more modular again with the introduction of nanoservices. While a microservice may accept multiple commands (for example, get user, create user, and modify user attribute), a nanoservice will do exactly one thing (for example, get user). The lines between micro and nano are often blurred, and this raises the challenge of how small or big we should actually make a nanoservice.

A nanoservice still works within a particular domain to solve a problem, but the scope of its functionality is narrower. Many nanoservices can make up one microservice, with each nanoservice knowing where to find its source data and how to structure the data for a meaningful response. Each nanoservice also manages its own error handling. For example, when paired with a RESTful API, this means being able to return a 5xx HTTP response, the status code range that indicates a server-side error.

Nanoservice with one function for each API operation
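By contrast, a sketch of the nanoservice version dedicates one function to a single operation. In this hypothetical get user function (the DynamoDB table name is a placeholder), the service owns its error handling and returns a 5xx response when something goes wrong on its side:

    import json
    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("users")  # placeholder table name

    def get_user(event, context):
        """Nanoservice: does exactly one thing - fetch a user by ID."""
        try:
            user_id = event["pathParameters"]["id"]
            item = table.get_item(Key={"id": user_id}).get("Item")
            if item is None:
                return {"statusCode": 404, "body": "user not found"}
            return {"statusCode": 200, "body": json.dumps(item)}
        except Exception:
            # The nanoservice owns its error handling: a 5xx status
            # code signals a server-side failure to the caller.
            return {"statusCode": 500, "body": "internal error"}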

Nanoservices can be helpful because they allow us to go deeper into the parts of the application that we are getting the most use out of. Reporting for cost control can be fine-grained and can also help a Product Owner prioritize the optimization of a particular function.

One key principle of a nanoservice is that it must be more useful than the overhead it incurs.

While the code within a function can be simpler, having many more functions increases the complexity of deployment, versioning, and maintaining a registry of functionality. As we'll find out later in this book, there is an application framework called the Serverless Framework that is very useful for managing these challenges. Something else that was released more recently is the AWS Serverless Application Repository, a registry of useful domain logic (nanoservices) that developers can use as building blocks in their own applications.

In terms of communication overhead between nanoservices, it's advisable to minimize the number of synchronous callouts a nanoservice makes; otherwise, the service may spend a long time waiting for all the pieces of information it needs before it can assemble a response. In that case, a fire-and-forget invocation style, where functions are called asynchronously, may be more suitable; alternatively, it may be worth rethinking where the source data resides.
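A small sketch of the fire-and-forget style with boto3 (the function name and payload are assumptions for illustration): invoking a Lambda function with InvocationType="Event" queues the request and returns immediately, so the caller never blocks on the response:

    import json
    import boto3

    lambda_client = boto3.client("lambda")

    # 'Event' invocation is asynchronous: the call returns as soon as
    # the request is accepted, without waiting for the function to run.
    lambda_client.invoke(
        FunctionName="enrich-booking-data",  # placeholder function name
        InvocationType="Event",
        Payload=json.dumps({"bookingId": "abc-123"}),
    )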

A nanoservice should be reusable, but doesn't necessarily have to be usable as a complete service on its own. It should follow the Unix principle of small, composable applications that can be used as building blocks for larger ones. Think about the sed or grep Unix commands: these small utilities are useful by themselves, but can also be combined to make up larger applications. For example, you may have a microservice that is responsible for managing everything to do with room bookings in a hotel. A nanoservice approach might break this into specific, discrete booking tasks, such as creating a new booking, finding a specific attribute, or performing a specific system integration. Each nanoservice can be used to make up the room booking workflow, and can also be reused by other applications where useful.

Developing applications that are made up of nanoservices makes it easier to make changes to functions with a smaller potential impact. With serverless technologies such as AWS Lambda, it's also possible to deploy these changes without an outage to the service, provided the new change is still compatible with its consumers.
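One way to achieve this on AWS Lambda (a sketch of the general technique, not a prescription from this chapter; the function and alias names are placeholders) is to publish the new code as a version and shift a fraction of traffic to it through an alias:

    import boto3

    lambda_client = boto3.client("lambda")

    # Publish the newly deployed code as an immutable version.
    new_version = lambda_client.publish_version(
        FunctionName="room-bookings"  # placeholder function name
    )["Version"]

    # Send 10% of the alias traffic to the new version. Consumers keep
    # calling the 'live' alias and see no outage while we validate.
    lambda_client.update_alias(
        FunctionName="room-bookings",
        Name="live",
        RoutingConfig={"AdditionalVersionWeights": {new_version: 0.1}},
    )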

As we mentioned earlier, choosing to go to the granularity of a nanoservice comes with certain challenges. In a dynamic team with an ever-increasing number of nanoservices, thought has to be put into how to approach the following topics:

  • Service sprawl, where we have lots of nanoservices performing the same or similar functions.
  • Inter-service dependencies in terms of how we maintain which nanoservices have relationships with other services and data sources.
  • Too big or too small: at what point do we decide that the overhead becomes too burdensome?
  • What is the usage pattern? Am I doing complex computational tasks or long-running tasks, or do I rely on a large amount of memory? Such patterns may be better suited to a microservice hosted in a container.

Some say that functions in the serverless world are the smallest level of granularity that we should abstract to. Next, we'll put our thinking hats on and see what we think may be coming up in our next architecture evolution.