Serverless Design Patterns and Best Practices

By : Brian Zambrano
Overview of this book

Serverless applications handle many problems that developers face when running systems and servers. The serverless pay-per-invocation model can also result in drastic cost savings, contributing to its popularity. While it's simple to create a basic serverless application, it's critical to structure your software correctly to ensure it continues to succeed as it grows. Serverless Design Patterns and Best Practices presents patterns that can be adapted to run in a serverless environment. You will learn how to develop applications that are scalable, fault tolerant, and well-tested. The book begins with an introduction to the different design pattern categories available for serverless applications. You will learn the trade-offs between GraphQL and REST and how they fare regarding overall application design in a serverless ecosystem. The book will also show you how to migrate an existing API to a serverless backend using AWS API Gateway. You will learn how to build event-driven applications using queuing and streaming systems, such as AWS Simple Queue Service (SQS) and AWS Kinesis. Patterns for data-intensive serverless applications are also explained, including the lambda architecture and MapReduce. This book will equip you with the knowledge and skills you need to develop scalable and resilient serverless applications confidently.
Table of Contents (18 chapters)
Title Page
Copyright and Credits
Dedication
Packt Upsell
Contributors
Preface
Index

Chapter 6. Asynchronous Processing with the Messaging Pattern

In the last chapter, we discussed the Fan-out Pattern, which we can implement using different strategies. At the end of that chapter, we reviewed an implementation of the Fan-out Pattern that used AWS's Simple Queue Service (SQS) as a destination for an event trigger. Queuing systems such as SQS provide a level of safety because they're intended to be a durable persistent store where data lives until some process has the chance to pull it out, perform some work, and delete the item. If a downstream worker process crashes entirely and processing stops for some time, queues merely back up, drastically reducing the risk of data loss. If a worker process runs into an unrecoverable problem in the middle of processing an item, that item will typically be left on the queue to be retried by another worker in the future.
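To make this retry behavior concrete, here is a minimal, hypothetical in-memory sketch of the SQS-style contract described above: a received message is hidden rather than removed, and it only disappears when the consumer explicitly deletes it, so a crashed worker's message becomes visible again for another consumer. The `VisibilityQueue` class and its method names are illustrative inventions, not the real `boto3` API.

```python
from collections import deque

class VisibilityQueue:
    """Toy model of queue semantics: delete-after-processing with a
    visibility timeout. Illustrative only; not the real SQS client."""

    def __init__(self):
        self._visible = deque()   # messages available to consumers
        self._in_flight = {}      # receipt handle -> hidden message body
        self._next_handle = 0

    def send(self, body):
        self._visible.append(body)

    def receive(self):
        """Hide one message and return (handle, body), or None if empty."""
        if not self._visible:
            return None
        body = self._visible.popleft()
        handle = self._next_handle
        self._next_handle += 1
        self._in_flight[handle] = body
        return handle, body

    def delete(self, handle):
        """Acknowledge successful processing; the message is gone for good."""
        del self._in_flight[handle]

    def expire_visibility(self):
        """Simulate the visibility timeout elapsing: any message still in
        flight (its worker never deleted it) becomes visible again."""
        for body in self._in_flight.values():
            self._visible.append(body)
        self._in_flight.clear()

q = VisibilityQueue()
q.send("resize-image-42")

handle, body = q.receive()
# Suppose the worker crashes here, before calling q.delete(handle)...
q.expire_visibility()

# The message is back on the queue for another worker to retry.
handle, body = q.receive()
q.delete(handle)   # this time processing succeeded
```

With the real SQS service, the hiding and re-appearance is handled by the queue's visibility timeout setting, and the consumer acknowledges success by deleting the message with its receipt handle; the key point is the same as above: no delete means an eventual retry.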

In this chapter, we will cover using queues as messaging systems to glue together multiple serverless...