Building Serverless Microservices in Python

By: Richard Takashi Freeman
Overview of this book

Over the last few years, there has been a massive shift from monolithic architecture to microservices, thanks to their small and independent deployments that allow increased flexibility and agile delivery. Traditionally, virtual machines and containers were the principal mediums for deploying microservices, but they involved a lot of operational effort, configuration, and maintenance. More recently, serverless computing has gained popularity due to its built-in autoscaling abilities, reduced operational costs, and increased productivity. Building Serverless Microservices in Python begins by introducing you to serverless microservice structures. You will then learn how to create your first serverless data API and test your microservice. Moving on, you'll delve into data management and work with serverless patterns. Finally, the book introduces you to the importance of securing microservices. By the end of the book, you will have gained the skills you need to combine microservices with serverless computing, making their deployment much easier thanks to the cloud provider managing the servers and capacity planning.

Understanding different architecture types and patterns

In this section, we will discuss different architectures, such as monolithic and microservices, along with their benefits and drawbacks.

The monolithic multi-tier architecture and the monolithic service-oriented architecture

At the start of my career, while I was working for global Fortune 500 clients at Capgemini, we tended to use a multi-tier architecture, where you create different physically separate layers that you can update and deploy independently. For example, as shown in the following three-tier architecture diagram, you can use Presentation, Domain logic, and Data Storage layers:

In the presentation layer, you have the user interface elements and any presentation-related applications. In the domain-logic layer, you have all the business logic and anything to do with processing the data passed from the presentation layer. Elements in the domain logic also deal with passing data to the storage or data layer, which has the data-access components and any database or filesystem elements. For example, if you want to change the database technology from SQL Server to MySQL, you only have to change the data-access components rather than modifying elements in the presentation or domain-logic layers. This allows you to decouple the type of storage from the presentation and business logic, enabling you to more readily change the database technology by swapping the data-storage layer.
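To make this concrete, here is a minimal Python sketch of the idea (the class and function names are illustrative assumptions, not code from this book): the domain-logic layer depends only on an abstract data-access contract, so swapping SQL Server for MySQL only touches the data-storage layer.

```python
from abc import ABC, abstractmethod


class CustomerRepository(ABC):
    """Data-access contract the domain-logic layer depends on."""

    @abstractmethod
    def get_customer(self, customer_id: str) -> dict:
        ...


class SqlServerCustomerRepository(CustomerRepository):
    """Data-storage layer implementation backed by SQL Server."""

    def get_customer(self, customer_id: str) -> dict:
        # Placeholder for a real SQL Server query
        return {"id": customer_id, "source": "sqlserver"}


class MySqlCustomerRepository(CustomerRepository):
    """Drop-in replacement backed by MySQL; nothing above this layer changes."""

    def get_customer(self, customer_id: str) -> dict:
        # Placeholder for a real MySQL query
        return {"id": customer_id, "source": "mysql"}


def customer_summary(repo: CustomerRepository, customer_id: str) -> str:
    """Domain-logic layer: only knows about the abstract repository."""
    customer = repo.get_customer(customer_id)
    return f"Customer {customer['id']} (from {customer['source']})"


# Swapping the database technology is a one-line change at wiring time:
print(customer_summary(MySqlCustomerRepository(), "42"))
```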

A few years later at Capgemini, we implemented our clients' projects using service-oriented architecture (SOA), which is much more granular than the multi-tier architecture. It is basically the idea of having standardized service contracts and a service registry that allow for automation and abstraction:

There are four important service properties related to SOA:

  • Each service needs to have a clear business activity that is linked to an activity in the enterprise.
  • Anybody consuming the service does not need to understand the inner workings.
  • All the information and systems are self-contained and abstracted.
  • To support its composability, the service may consist of other underlying services.

Here are some important SOA principles:

  • Standardized
  • Loosely coupled
  • Abstract
  • Stateless
  • Granular
  • Composable
  • Discoverable
  • Reusable

The first principle is that there is a standardized service contract. This is basically a communication agreement that's defined at the enterprise level, so that when you consume a service, you know exactly which service it is, the contract for passing in messages, and what you are going to get back. These services are loosely coupled, which means they can work autonomously, but also that you can access them from any location within the enterprise network. They are also abstract, meaning that each service is a black box whose inner logic is hidden away and that can work independently of other services.

Some services will also be stateless. That means that, if you call a service, passing in a request, you will get a response, and you would also get an exception if there is a problem with the service or the payload. Granularity is also very important within SOA. The service needs to be granular enough that it's not called inefficiently or too many times, so we want to normalize the level and the granularity of the service. Some services can be decomposed if they're being reused by other services, or services can be joined together and normalized to minimize redundancy. Services also need to be composable, so you can merge them together into larger services or split them up.

There's a standardized set of contracts, but the service also needs to be discoverable. Discoverable means that there is a way to automatically discover what services are available, what endpoints are available, and a way to interpret them. Finally, reusability is really important for SOA: the same logic can be reused in other parts of the code base.
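As a rough illustration of the standardized-contract and stateless principles (a sketch under assumed names, not a specific SOA framework), a service can publish the exact request and response messages it works with, and raise an exception rather than holding any error state between calls:

```python
from dataclasses import dataclass


@dataclass
class InvoiceRequest:
    """Standardized contract: the exact message a consumer passes in."""
    customer_id: str
    amount: float


@dataclass
class InvoiceResponse:
    """Standardized contract: the exact message a consumer gets back."""
    invoice_id: str
    status: str


class InvalidPayloadError(Exception):
    """Raised for a bad payload; no error state is kept, so the service stays stateless."""


def create_invoice(request: InvoiceRequest) -> InvoiceResponse:
    """Stateless operation: no session state is retained between calls."""
    if request.amount <= 0:
        raise InvalidPayloadError("amount must be positive")
    return InvoiceResponse(invoice_id=f"INV-{request.customer_id}", status="CREATED")


print(create_invoice(InvoiceRequest(customer_id="42", amount=99.0)))
```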

Benefits of monolithic architectures

In SOA, the architecture is loosely coupled. All the services for the enterprise are defined in one repository, which gives us good visibility of the services available. In addition, there is a global data model. Usually, there is one data store where we keep all the data, and each individual service reads from or writes to it. This allows the data to be centralized at a global level.

Another benefit is that there is usually a small number of large services, which are driven by a clear business goal. This makes them easy to understand and consistent for our organization. In general, the communication between the services is decoupled via either smart pipelines or some kind of middleware.

Drawbacks of monolithic architectures

The drawback of the monolithic architecture is that there is usually a single technology stack. This means the application server, web server, and database frameworks are consistent throughout the enterprise. Obsolete libraries and code can be difficult to upgrade, as everything depends on a single stack and all the services need to be aligned on the same library versions.

Another drawback is that the code base is usually very large on a single stack, which means that there are long build and test times when building and deploying the code. The services are deployed on a single server or a large cluster of application and web servers. This means that, in order to scale, you need to scale the whole server, so there's no ability to deploy and scale applications independently. To scale out an application, you need to scale out the web application or the application server that hosts the application.

Another drawback is that there's generally a middleware orchestration layer or integration logic that is centralized. For example, services would use a Business Process Management (BPM) framework to control the workflow, you would use an Enterprise Service Bus (ESB), which allows you to route your messages centrally, or you'd have some kind of middleware that would deal with the integration between the services themselves. A lot of this logic is tied up centrally, and you have to be very careful not to break any inter-service communication when you're changing the configuration of that centralized logic.

Overview of microservices

The term microservice arose from a workshop in 2011, when different teams described an architecture style that they had used. In 2012, Adrian Cockcroft from Netflix described microservices as fine-grained SOA, and Netflix pioneered this fine-grained SOA at web scale.

One type of design used in microservices is event-driven. For example, if we have sensors on an Internet of Things (IoT) device and there's a change of temperature, we would emit an event as a possible warning further downstream. This is what's called event-stream processing or complex-event processing. Essentially, everything is driven by events throughout the whole architecture.
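A minimal sketch of that kind of event emission, assuming an AWS SNS topic as the publish-subscribe channel (the topic ARN, threshold, and function name are made-up values for illustration):

```python
import json

import boto3

# Hypothetical SNS topic used as the publish-subscribe channel for sensor events
TOPIC_ARN = "arn:aws:sns:eu-west-1:123456789012:temperature-events"
WARNING_THRESHOLD_CELSIUS = 40.0

sns = boto3.client("sns")


def on_temperature_reading(device_id: str, celsius: float) -> None:
    """Emit a warning event downstream when a sensor reading crosses the threshold."""
    if celsius > WARNING_THRESHOLD_CELSIUS:
        sns.publish(
            TopicArn=TOPIC_ARN,
            Message=json.dumps(
                {"device_id": device_id, "temperature": celsius, "event": "TEMPERATURE_WARNING"}
            ),
        )
```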

The other type of design used in microservices is called domain-driven design (DDD). This is essentially where there is a common language between the domain experts and the developers. The other important concept in DDD is the bounded context, which is a strict model of consistency that applies within its bounds for each service. For example, if a service deals with customer invoicing, that service is the central point and the only place where customer invoicing can be processed, written to, or updated. The benefit is that there won't be any confusion around data-access responsibilities with systems outside of the bounded context.

You could think of a microservice as centered around a REST endpoint or application programming interface using JSON standards. A lot of the logic could be built into the service. This is what is called a dumb pipeline but a smart endpoint, and you can see why in the diagram. We have a service that deals with customer support, as follows:

For example, the endpoint would update customer support details, add a new ticket, or get customer support details for a specific identifier. We have a local customer support data store, so all the information around customer support is stored in that data store, and you can see that the microservice emits customer-support events. These are sent out on a publish-subscribe mechanism or using other event-publishing patterns, such as Command Query Responsibility Segregation (CQRS). You can see that this fits within the bounded context. There's a single responsibility around this bounded context. So, this microservice controls all information around customer support.
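The following is a rough sketch of such a smart endpoint as an AWS Lambda handler behind an API Gateway-style REST API (the event shape, in-memory data store, and event-emission helper are illustrative assumptions, not the book's implementation): all reads and writes to customer-support tickets go through this one service, and an event is emitted whenever a ticket changes.

```python
import json

# In-memory stand-in for the microservice's local customer-support data store
TICKETS = {"1": {"ticket_id": "1", "subject": "Login issue", "status": "OPEN"}}


def emit_customer_support_event(event_type: str, ticket: dict) -> None:
    """Placeholder for publishing a customer-support event (for example, to SNS or Kinesis)."""
    print(json.dumps({"event": event_type, "ticket": ticket}))


def lambda_handler(event: dict, context) -> dict:
    """Single-responsibility endpoint: all customer-support reads and writes go through here."""
    method = event.get("httpMethod")
    ticket_id = (event.get("pathParameters") or {}).get("ticket_id")

    if method == "GET" and ticket_id in TICKETS:
        return {"statusCode": 200, "body": json.dumps(TICKETS[ticket_id])}

    if method == "PUT" and ticket_id:
        TICKETS[ticket_id] = json.loads(event.get("body") or "{}")
        emit_customer_support_event("TICKET_UPDATED", TICKETS[ticket_id])
        return {"statusCode": 200, "body": json.dumps(TICKETS[ticket_id])}

    return {"statusCode": 404, "body": json.dumps({"message": "not found"})}
```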

Benefits and drawbacks of microservice architectures

The bounded context, and the fact that this is a very small code base, allow you to build very frequently and deploy very frequently. In addition, you can scale these services independently. There's usually one application server or web server per microservice, and you can scale it out very quickly, just for the specific service that you want to. You can also have frequent builds that you test more frequently, and you can use any type of language, database, or web application server. This allows it to be a polyglot system. The bounded context is very important, as you can model one domain. Features can be released very quickly because, for example, the customer-support microservice could actually control all changes to the data, so you can deploy these components a lot faster.

However, there are some drawbacks to using a microservices architecture. First, there's a lot of complexity in terms of distributed development and testing. In addition, the services talk a lot more, so there's more network traffic; latency and networking become very important in microservices, and the DevOps team has to maintain and monitor the time it takes to get a response from another service. The changing of responsibilities is another complication. For example, if we're splitting up one of the bounded contexts into several sub-bounded contexts, you need to think about how that works within teams. A dedicated DevOps team is also generally needed, which is essentially there to support and maintain a much larger number of services and machines throughout the organization.

SOA versus microservices

Now that we have a good understanding of both, we will compare the SOA and microservices architectures. In terms of the communication itself, both SOA and microservices can use synchronous and asynchronous communication. SOA typically relied on Simple Object Access Protocol (SOAP) or web services. Microservices tend to be more modern and widely use REpresentational State Transfer (REST) Application Programming Interfaces (APIs).

We will start with the following diagram, which compares SOA and microservices:

The orchestration is where there's a big differentiation. In SOA, everything is centralized around a BPM, ESB, or some kind of middleware. All the integration between services and the flow of data is controlled centrally. This allows you to configure any changes in one place, which has some advantages.

The microservices approach has been to use a more choreography-based approach. This is where an individual service is smarter, that is, a smart endpoint but a dumb pipeline. That means that the services know exactly who to call and what data they will get back, and they manage that process within the microservice. This gives us more flexibility in terms of the integration for microservices. In the SOA world or the three-tier architecture, there's less flexibility, as it's usually a single code base and the integration is a large set of monolithic releases and deployments of user-interface or backend services. This can limit the flexibility of your enterprise. Microservices, however, are much smaller, can be deployed in isolation, and are much more fine-grained.

Finally, on the architecture side, SOA works at the enterprise level, where an enterprise architect or solutions architect would model and control the release of all the services in a central repository. Microservices are much more flexible: they work at the project level, where the team is composed of a very small number of developers, small enough to sit around and share a pizza. So, this gives you much more flexibility to make decisions rapidly at the project level, rather than having to get everything agreed at the enterprise level.