Microservice Patterns and Best Practices

By : Vinicius Feitosa Pacheco

Overview of this book

Microservices are a hot trend in the development world right now. Many enterprises have adopted this approach to achieve agility and the continuous delivery of applications to gain a competitive advantage. This book will take you through different design patterns at different stages of microservice application development, along with their best practices. Microservice Patterns and Best Practices starts with the key concepts of microservices, showing you how to make the right choices when designing microservices. You will then move on to internal microservice application patterns, such as caching strategies, asynchronism, CQRS and event sourcing, circuit breakers, and bulkheads. As you progress, you'll learn the design patterns of microservices. The book will guide you on where to use the right design pattern at each application development stage and how to break a monolithic application into microservices. You will also be taken through the best practices and patterns involved in testing, securing, and deploying your microservice application. By the end of the book, you will easily be able to create interoperable microservices that are testable and prepared for optimum performance.

Caching at the client level


A caching strategy is one of the most important items for discussion when it comes to web applications; with microservices it is no different.

When we speak of caching at the client level, it means that a request is only passed to the backend for processing if really necessary. In other words, the cache tries to block direct access to the backend for requests that have already been answered in the recent past.
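The idea above can be sketched in a few lines of Python. This is a minimal illustration, not Varnish itself; `fetch_from_backend` is a hypothetical stand-in for the real backend call:

```python
# A minimal sketch of client-level caching: a request only reaches the
# backend when its answer is not already cached.

cache = {}

def fetch_from_backend(path):
    # Placeholder for an expensive call to the real backend service.
    return f"response for {path}"

def handle_request(path):
    # Serve from the cache when possible; otherwise hit the backend once
    # and remember the answer for subsequent identical requests.
    if path in cache:
        return cache[path], "HIT"
    response = fetch_from_backend(path)
    cache[path] = response
    return response, "MISS"
```

The first call for a given path is a miss and reaches the backend; every identical call after that is a hit served entirely from the cache.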

A very useful tool for this strategy is Varnish Cache, which is defined as a web application accelerator, also known as a caching HTTP reverse proxy. In the following diagram, we can see the operation of Varnish Cache:

The requests come from various types of web clients. Varnish Cache passes a request to the Server only the first time, and stores the data received from the Server. If a second request arrives for information already held in Varnish Cache, then Varnish Cache answers the request itself, sparing the Server from that access.
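The flow described above is configured in Varnish's own configuration language, VCL. The following is a minimal sketch; the backend host, port, and the 60-second TTL are assumptions for illustration, not values from the book:

```vcl
vcl 4.0;

# The backend Server that Varnish forwards cache misses to.
backend server {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_backend_response {
    # Keep responses from the Server in the cache for 60 seconds,
    # so repeated requests in that window never reach the backend.
    set beresp.ttl = 60s;
}
```

With this in place, the first request for a URL is fetched from the backend and cached; identical requests within the TTL are answered directly by Varnish.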

Varnish Cache can store a number of different types of information in memory, but what matters most is the data being transmitted. If the information is not cacheable, Varnish always lets the request go through to the Server, because it has no way of knowing whether the request is of the same type as one it has already seen.
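Whether a response can be cached is driven by its HTTP headers. The check below is a simplified approximation of typical reverse-proxy defaults (an assumption for illustration, not Varnish's exact built-in logic):

```python
# Simplified sketch: decide from response headers whether a cache such
# as Varnish may store the response for reuse.

def is_cacheable(headers):
    cache_control = headers.get("Cache-Control", "")
    # no-store / private mark the response as unsuitable for a shared cache.
    if "no-store" in cache_control or "private" in cache_control:
        return False
    # Responses that set cookies are typically user-specific,
    # so shared caches refuse them by default.
    if "Set-Cookie" in headers:
        return False
    return True
```

A response marked `Cache-Control: public, max-age=60` would be stored and reused; one marked `no-store`, or one carrying a `Set-Cookie` header, would always be passed through to the Server.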