Mastering Service Mesh

By: Anjali Khatri, Vikram Khatri

Overview of this book

Although microservices-based applications support DevOps and continuous delivery, they can also add to the complexity of testing and observability. The implementation of a service mesh architecture, however, allows you to secure, manage, and scale your microservices more efficiently. With the help of practical examples, this book demonstrates how to install, configure, and deploy an efficient service mesh for microservices in a Kubernetes environment. You'll get started with a hands-on introduction to the concepts of cloud-native application management and service mesh architecture, before learning how to build your own Kubernetes environment. While exploring later chapters, you'll get to grips with the three major service mesh providers: Istio, Linkerd, and Consul. You'll be able to identify their specific functionalities, from traffic management, security, and certificate authority through to sidecar injections and observability. By the end of this book, you will have developed the skills you need to effectively manage modern microservices-based applications.
Table of Contents (31 chapters)

Section 1: Cloud-Native Application Management
Section 2: Architecture
Section 3: Building a Kubernetes Environment
Section 4: Learning about Istio through Examples
Section 5: Learning about Linkerd through Examples
Section 6: Learning about Consul through Examples

Summary

Applications can move from being merely robust to being resilient through Linkerd's sidecar proxy, which provides adaptive load balancing, easy-to-understand debugging capabilities, timeouts, and retries. We explored each of these capabilities in this chapter.

With the help of a service profile, defined through a Kubernetes custom resource definition (CRD), you can define routes that report aggregated metrics for unique requests matching defined patterns. The service profile name is a fully qualified name that matches the HTTP/2 :authority or HTTP/1.x Host header. Linkerd's load balancing operates at L7 (application streams) rather than Kubernetes' default L4 (TCP connections). You can also configure retry budgets to prevent retry storms from overwhelming backends.
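As a brief illustration, the following is a minimal sketch of such a service profile; the webapp service, the default namespace, the GET /books route, and the retry budget values are illustrative assumptions, not values taken from this chapter.

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # Fully qualified name matching the HTTP/2 :authority or HTTP/1.x Host header
  name: webapp.default.svc.cluster.local
  namespace: default
spec:
  routes:
  # Requests matching this pattern are aggregated into per-route metrics
  - name: GET /books
    condition:
      method: GET
      pathRegex: /books
    isRetryable: true           # allow Linkerd to retry failed requests on this route
  retryBudget:
    retryRatio: 0.2             # retries may add at most 20% extra load
    minRetriesPerSecond: 10     # floor so low-traffic services can still retry
    ttl: 10s                    # window over which the retry ratio is calculated
```

With a profile like this applied, per-route metrics and retry behavior become visible through Linkerd's tooling rather than only aggregate, per-service numbers.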

The Linkerd dashboard or the Linkerd CLI can be used to observe the live traffic arriving in...