Mastering Service Mesh

By: Anjali Khatri, Vikram Khatri

Overview of this book

Although microservices-based applications support DevOps and continuous delivery, they can also add to the complexity of testing and observability. The implementation of a service mesh architecture, however, allows you to secure, manage, and scale your microservices more efficiently. With the help of practical examples, this book demonstrates how to install, configure, and deploy an efficient service mesh for microservices in a Kubernetes environment. You'll get started with a hands-on introduction to the concepts of cloud-native application management and service mesh architecture, before learning how to build your own Kubernetes environment. While exploring later chapters, you'll get to grips with the three major service mesh providers: Istio, Linkerd, and Consul. You'll be able to identify their specific functionalities, from traffic management, security, and certificate authority through to sidecar injections and observability. By the end of this book, you will have developed the skills you need to effectively manage modern microservices-based applications.
Table of Contents (31 chapters)

Section 1: Cloud-Native Application Management
Section 2: Architecture
Section 3: Building a Kubernetes Environment
Section 4: Learning about Istio through Examples
Section 5: Learning about Linkerd through Examples
Section 6: Learning about Consul through Examples

Understanding the BookInfo application

In a traditional environment, you cannot have multiple versions of the same service running at the same time unless some routing logic is built into the application layer.

In the preceding example, however, three versions of the Reviews microservice are running at the same time. Because the application runs in a Kubernetes environment with Service definitions, multiple versions of the same microservice can coexist. Without a service mesh, though, traffic is spread across those versions more or less at random, and we cannot control which version receives a given request.
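The reason the routing appears random becomes clearer from the manifests: in the BookInfo sample, a single Kubernetes Service selects the reviews pods only by their app label, so its endpoints include pods from every version. The following is a minimal sketch of the reviews Service and one of its versioned Deployments, modeled on the upstream BookInfo manifests (the image tag here is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
spec:
  ports:
  - name: http
    port: 9080
  selector:
    app: reviews        # matches v1, v2, and v3 pods alike, so requests are balanced across all of them
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v2
  labels:
    app: reviews
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v2
  template:
    metadata:
      labels:
        app: reviews
        version: v2    # the version label distinguishes this Deployment, but the Service ignores it
    spec:
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v2:1.16.2   # illustrative tag
        ports:
        - containerPort: 9080

Deployments for reviews-v1 and reviews-v3 differ only in their version label and image, which is why kube-proxy distributes requests across all three without any notion of "which version."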

You can think of it this way: you have a frontend web application that is already running stably but does not use modern web UI capabilities. You want to enable another web UI frontend for a handful of customers without affecting the others. This type of selective rollout...
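In the Istio chapters later in the book, this kind of selective rollout is expressed as routing rules rather than application code. As a preview, here is a minimal sketch, assuming Istio's DestinationRule and VirtualService resources, that sends traffic from a single test user (identified by the end-user request header) to reviews v2 while every other request stays on v1:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:              # name each version label as a routable subset
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:              # only requests carrying this header go to v2
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:              # default route for everyone else
    - destination:
        host: reviews
        subset: v1

The key point is that the selection logic lives entirely in the mesh configuration: the application containers are untouched, and widening the rollout is a matter of changing the match condition or shifting traffic weights.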